Article

Differential diagnosis generators: an evaluation of currently available computer programs.

Department of Emergency Medicine, Lehigh Valley Health Network, Allentown, PA, USA.
Journal of General Internal Medicine (Impact Factor: 3.42). 07/2011; 27(2):213-9. DOI: 10.1007/s11606-011-1804-8
Source: PubMed

ABSTRACT: Differential diagnosis (DDX) generators are computer programs that generate a DDX based on various clinical data.
We identified evaluation criteria through consensus, applied these criteria to describe the features of DDX generators, and tested performance using cases from the New England Journal of Medicine (NEJM©) and the Medical Knowledge Self Assessment Program (MKSAP©).
We first identified evaluation criteria by consensus. Then we performed Google® and PubMed searches to identify DDX generators. To be included, DDX generators had to do the following: generate a list of potential diagnoses rather than text or article references; rank or indicate critical diagnoses that need to be considered or eliminated; accept at least two signs, symptoms, or disease characteristics; provide the ability to compare the clinical presentations of diagnoses; and provide diagnoses in general medicine. The evaluation criteria were then applied to the included DDX generators. Lastly, the performance of the DDX generators was tested with findings from 20 test cases. Performance on each case was scored from one to five, with a score of five indicating presence of the exact diagnosis. Mean scores and confidence intervals were calculated.
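As a rough illustration of the scoring summary described above, the mean and 95% confidence interval for a set of per-case scores can be sketched in Python. The helper `score_summary` and the sample scores below are hypothetical (not the study's data), and the t critical value of 2.093 assumes a two-sided 95% interval with 19 degrees of freedom (20 cases); the paper does not state which CI method it used.

```python
import math
from statistics import mean, stdev

def score_summary(scores, t_crit=2.093):
    """Mean and two-sided 95% CI for per-case scores on a 1-5 scale.

    t_crit defaults to the 95% t critical value for df = 19
    (20 test cases, as in the study); adjust for other sample sizes.
    """
    n = len(scores)
    m = mean(scores)
    half_width = t_crit * stdev(scores) / math.sqrt(n)
    return m, (m - half_width, m + half_width)

# Hypothetical scores for one DDX generator across 20 test cases
# (illustrative only).
scores = [5, 4, 3, 5, 2, 4, 3, 5, 1, 4, 3, 5, 2, 4, 3, 5, 4, 2, 3, 4]
m, (lo, hi) = score_summary(scores)
print(f"mean {m:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

This mirrors the form of the reported results, e.g. a mean score with a (lower, upper) interval per program.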
Twenty-three programs were initially identified, and four met the inclusion criteria. These four programs were evaluated using the consensus criteria, which included the following: input method; mobile access; filtering and refinement; lab values, medications, and geography as diagnostic factors; evidence-based medicine (EBM) content; references; and drug information content source. The mean scores (95% confidence interval) from performance testing on a five-point scale were Isabel© 3.45 (2.53, 4.37), DxPlain® 3.45 (2.63, 4.27), Diagnosis Pro® 2.65 (1.75, 3.55), and PEPID™ 1.70 (0.71, 2.69). The number of exact matches paralleled the mean scores.
Consensus criteria for DDX generator evaluation were developed. Application of these criteria as well as performance testing supports the use of DxPlain® and Isabel© over the other currently available DDX generators.

  • ABSTRACT: Failure or delay in diagnosis is a common preventable source of error. The authors sought to determine the frequency with which high-information clinical findings (HIFs) suggestive of a high-risk diagnosis (HRD) appear in the medical record before HRD documentation. A knowledge base from a diagnostic decision support system was used to identify HIFs for selected HRDs: lumbar disc disease, myocardial infarction, appendicitis, and colon, breast, lung, ovarian and bladder carcinomas. Two physicians reviewed at least 20 patient records retrieved from a research patient data registry for each of these eight HRDs and for age- and gender-compatible controls. Records were searched for HIFs in visit notes that were created before the HRD was established in the electronic record and in general medical visit notes for controls. Of the records reviewed, 25% (61/243) contained HIFs in notes before the HRD was established. The mean duration between HIFs first occurring in the record and time of diagnosis ranged from 19 days for breast cancer to 2 years for bladder cancer. In three of the eight HRDs, HIFs were much less likely in control patients without the HRD. In many records of patients with an HRD, HIFs were present before the HRD was established. Reasons for delay include non-compliance with recommended follow-up, unusual presentation of a disease, and system errors (e.g., lack of laboratory follow-up). The presence of HIFs in clinical records suggests a potential role for the integration of diagnostic decision support into the clinical workflow to provide reminder alerts to improve the diagnostic focus.
    Journal of the American Medical Informatics Association 03/2012; 19(4):591-6. DOI:10.1136/amiajnl-2011-000375 · 3.93 Impact Factor
  • ABSTRACT: Health information technology (HIT) systems have the potential to reduce delayed, missed or incorrect diagnoses. We describe and classify the current state of diagnostic HIT and identify future research directions. A multi-pronged literature search was conducted using PubMed, Web of Science, backwards and forwards reference searches and contributions from domain experts. We included HIT systems evaluated in clinical and experimental settings as well as previous reviews, and excluded radiology computer-aided diagnosis, monitor alerts and alarms, and studies focused on disease staging and prognosis. Articles were organised within a conceptual framework of the diagnostic process and areas requiring further investigation were identified. HIT approaches, tools and algorithms were identified and organised into 10 categories related to those assisting: (1) information gathering; (2) information organisation and display; (3) differential diagnosis generation; (4) weighing of diagnoses; (5) generation of diagnostic plan; (6) access to diagnostic reference information; (7) facilitating follow-up; (8) screening for early detection in asymptomatic patients; (9) collaborative diagnosis; and (10) facilitating diagnostic feedback to clinicians. We found many studies characterising potential interventions, but relatively few evaluating the interventions in actual clinical settings and even fewer demonstrating clinical impact. Diagnostic HIT research is still in its early stages with few demonstrations of measurable clinical impact. Future efforts need to focus on: (1) improving methods and criteria for measurement of the diagnostic process using electronic data; (2) better usability and interfaces in electronic health records; (3) more meaningful incorporation of evidence-based diagnostic protocols within clinical workflows; and (4) systematic feedback of diagnostic performance.
    BMJ quality & safety 07/2013; DOI:10.1136/bmjqs-2013-001884 · 3.28 Impact Factor
  • Journal of General Internal Medicine 12/2011; 27(2):142-4. DOI:10.1007/s11606-011-1944-x · 3.42 Impact Factor
