Adam S Rothschild’s research while affiliated with Columbia University and other places


Publications (5)


Inter-patient distance metrics using SNOMED CT defining relationships
  • Article
  • January 2007 · 68 Reads · 78 Citations

Journal of Biomedical Informatics

Genevieve B Melton · Frances P Morrison · [...] · George Hripcsak

Patient-based similarity metrics are important case-based reasoning tools that may assist with research and patient care applications. Ontology and information content principles are potentially helpful for similarity metric development. Patient cases from 1989 through 2003 from the Columbia University Medical Center data repository were converted to SNOMED CT concepts. Five metrics were implemented: (1) percent disagreement with data as an unstructured "bag of findings," (2) average links between concepts, (3) links weighted by information content with descendants, (4) links weighted by information content with term prevalence, and (5) path distance using descendants weighted by information content with descendants. Three physicians served as the gold standard for 30 cases. Expert inter-rater reliability was 0.91, with rank correlations between 0.61 and 0.81, representing upper-bound performance. Correlations between the metrics and expert judgments were 0.27, 0.29, 0.30, 0.30, and 0.30, respectively. Using the SNOMED CT Clinical Findings axis alone increased the correlation to 0.37. Ontology principles and information content provide useful information for similarity metrics but currently fall short of expert performance.
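The link-based metrics above combine taxonomic distance over SNOMED CT is-a relationships with information content (IC), usually defined as the negative log of a concept's probability. The abstract does not give the exact formulation, so the following is a minimal sketch under assumptions: a toy is-a hierarchy stored as a parent map, IC estimated from descendant counts, and inter-concept distance taken as the IC-weighted length of the path through the nearest common ancestor. The toy concepts, the `PARENT` map, and `ic_weighted_distance` are illustrative, not the authors' implementation.

```python
import math

# Toy is-a hierarchy (child -> parent); a real system would load SNOMED CT.
PARENT = {
    "viral pneumonia": "pneumonia",
    "bacterial pneumonia": "pneumonia",
    "pneumonia": "lung disease",
    "asthma": "lung disease",
    "lung disease": "clinical finding",
    "clinical finding": None,
}

def ancestors(concept):
    """Return the chain of concepts from `concept` up to the root."""
    chain = []
    while concept is not None:
        chain.append(concept)
        concept = PARENT[concept]
    return chain

def descendant_count(concept):
    """Count concepts subsumed by `concept` (including itself)."""
    return sum(1 for c in PARENT if concept in ancestors(c))

def information_content(concept):
    """IC from descendant counts: more specific concepts score higher."""
    return -math.log(descendant_count(concept) / len(PARENT))

def ic_weighted_distance(a, b):
    """Sum of IC along the path from a and b up to their nearest common ancestor."""
    anc_a, anc_b = ancestors(a), ancestors(b)
    common = next(c for c in anc_a if c in anc_b)   # nearest common ancestor
    path = anc_a[:anc_a.index(common)] + anc_b[:anc_b.index(common)]
    return sum(information_content(c) for c in path)

print(ic_weighted_distance("viral pneumonia", "bacterial pneumonia"))  # small: close concepts
print(ic_weighted_distance("viral pneumonia", "asthma"))               # larger: more distant
```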


Leveraging systems thinking to design patient-centered clinical documentation systems

July 2005 · 42 Reads · 21 Citations

International Journal of Medical Informatics

A hospital is a system, yet healthcare information technology (IT) has largely failed to treat it as one, and this failure has contributed to inefficient and ineffective clinical documentation practices. This paper addresses how current clinical documentation practices reflect and reinforce inefficiency and poor patient care, and how rethinking clinical documentation and IT together may improve the entire healthcare process by promoting a more integrated, patient-centered healthcare information paradigm. Rethinking IT support for clinical documentation from a system-oriented perspective may help improve patient care and provider communication.


Inter-rater Agreement in Physician-coded Problem Lists

February 2005 · 31 Reads · 15 Citations

AMIA Annual Symposium Proceedings

Coded problem lists will be increasingly used for many purposes in healthcare. The usefulness of coded problem lists may be limited by (1) how consistently clinicians enumerate patients' problems and (2) how consistently clinicians choose a given concept from a controlled terminology to represent a given problem. In this study, 10 physicians reviewed the same 5 clinical cases and created a coded problem list for each case using the UMLS as a controlled terminology. We assessed inter-rater agreement for coded problem lists by computing the average pair-wise positive specific agreement for each case across all 10 reviewers. We also standardized problems to common terms across reviewers' lists for a given case, adjusting sequentially for synonymy, granularity, and general concept representation. Our results suggest that inter-rater agreement in unstandardized problem lists is moderate at best; standardization improves agreement, but much variability may be attributable to differences in clinicians' style and the inherent fuzziness of medical diagnosis.
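Positive specific agreement between two reviewers' problem lists is twice the number of shared problems divided by the sum of the two list lengths, and the study averages this over all reviewer pairs for a case. The snippet below is a minimal sketch of that calculation; the example problem lists and the function names (`positive_specific_agreement`, `average_pairwise_psa`) are illustrative placeholders, not the study's code or data.

```python
from itertools import combinations

def positive_specific_agreement(list_a, list_b):
    """2 * |A ∩ B| / (|A| + |B|): agreement restricted to positively listed problems."""
    a, b = set(list_a), set(list_b)
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def average_pairwise_psa(problem_lists):
    """Average positive specific agreement over all pairs of reviewers for one case."""
    pairs = list(combinations(problem_lists, 2))
    return sum(positive_specific_agreement(a, b) for a, b in pairs) / len(pairs)

# Hypothetical coded problem lists from three reviewers for the same case.
reviewers = [
    {"diabetes mellitus", "hypertension", "myocardial infarction"},
    {"diabetes mellitus", "hypertension"},
    {"diabetes mellitus", "myocardial infarction", "heart failure"},
]
print(round(average_pairwise_psa(reviewers), 2))  # 0.62 for this toy example
```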


Information Retrieval Performance of Probabilistically Generated, Problem-Specific Computerized Provider Order Entry Pick-Lists: A Pilot Study

January 2005 · 19 Reads · 10 Citations

Journal of the American Medical Informatics Association

The aim of this study was to preliminarily determine the feasibility of probabilistically generating problem-specific computerized provider order entry (CPOE) pick-lists from a database of explicitly linked orders and problems from actual clinical cases. In a pilot retrospective validation, physicians reviewed internal medicine cases consisting of the admission history and physical examination and the orders placed using CPOE during the first 24 hours after admission. They created coded problem lists and linked orders from individual cases to the problem for which they were most indicated. Problem-specific order pick-lists were generated by including a given order in a pick-list if the probability of linkage of order and problem (PLOP) equaled or exceeded a specified threshold. The PLOP for a given linked order-problem pair was computed as its prevalence among the other cases in the experiment with the given problem. The orders that the reviewer linked to a given problem instance served as the reference standard for evaluating its system-generated pick-list. Outcome measures were recall, precision, and pick-list length. Average recall reached a maximum of 0.67, with a precision of 0.17 and a pick-list length of 31.22, at a PLOP threshold of 0. Average precision reached a maximum of 0.73, with a recall of 0.09 and a pick-list length of 0.42, at a PLOP threshold of 0.9. Recall varied inversely with precision, in classic information retrieval behavior. We preliminarily conclude that it is feasible to generate problem-specific CPOE pick-lists probabilistically from a database of explicitly linked orders and problems. Further research is necessary to determine the usefulness of this approach in real-world settings.
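As described, an order enters a problem's pick-list when its PLOP, estimated as the order's prevalence among the other cases carrying that problem (a leave-one-out count), meets the threshold. The sketch below illustrates only that thresholding step; the data layout (a list of cases, each mapping problems to linked orders) and the `plop` and `pick_list` helpers are hypothetical, not the study's implementation.

```python
def plop(order, problem, cases, exclude_index):
    """Prevalence of `order` among the *other* cases that contain `problem`."""
    others = [c for i, c in enumerate(cases)
              if i != exclude_index and problem in c]
    if not others:
        return 0.0
    return sum(order in c[problem] for c in others) / len(others)

def pick_list(problem, cases, case_index, threshold):
    """Orders whose PLOP for `problem` meets the threshold, for one target case."""
    candidates = {o for i, c in enumerate(cases) if i != case_index
                  for o in c.get(problem, set())}
    return {o for o in candidates
            if plop(o, problem, cases, case_index) >= threshold}

# Hypothetical linked problem -> orders data for four admissions.
cases = [
    {"pneumonia": {"chest x-ray", "blood cultures", "ceftriaxone"}},
    {"pneumonia": {"chest x-ray", "azithromycin"}},
    {"pneumonia": {"chest x-ray", "blood cultures"}},
    {"heart failure": {"chest x-ray", "furosemide"}},
]

# Pick-list for "pneumonia" when evaluating case 0, at a PLOP threshold of 0.6.
print(pick_list("pneumonia", cases, case_index=0, threshold=0.6))
# {'chest x-ray'}: the only order linked to pneumonia in every other pneumonia case
```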


Agreement, the F-Measure, and Reliability in Information Retrieval

January 2005 · 760 Reads · 940 Citations

Journal of the American Medical Informatics Association

Information retrieval studies that involve searching the Internet or marking phrases usually lack a well-defined number of negative cases. This prevents the use of traditional interrater reliability metrics like the kappa statistic to assess the quality of expert-generated gold standards. Such studies often quantify system performance as precision, recall, and F-measure, or as agreement. It can be shown that the average F-measure among pairs of experts is numerically identical to the average positive specific agreement among experts and that kappa approaches these measures as the number of negative cases grows large. Positive specific agreement, or the equivalent F-measure, may be an appropriate way to quantify interrater reliability and therefore to assess the reliability of a gold standard in these studies.
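The central identity is easy to verify numerically: treating one expert's positive annotations as the "gold standard" and the other's as the "system output", F = 2|A ∩ B| / (|A| + |B|), which is exactly the positive specific agreement of a 2x2 table whose true-negative count is undefined. A minimal sketch, with made-up annotation sets rather than data from the paper:

```python
def f_measure(gold, system):
    """Harmonic mean of precision and recall for two sets of positive annotations."""
    tp = len(gold & system)
    precision = tp / len(system)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

def positive_specific_agreement(a, b):
    """2 * |A ∩ B| / (|A| + |B|): agreement computed without any true-negative count."""
    return 2 * len(a & b) / (len(a) + len(b))

# Two experts marking phrases in a document; no well-defined set of negatives exists.
expert_1 = {"chest pain", "dyspnea", "fever", "cough"}
expert_2 = {"chest pain", "fever", "nausea"}

assert abs(f_measure(expert_1, expert_2) -
           positive_specific_agreement(expert_1, expert_2)) < 1e-12
print(positive_specific_agreement(expert_1, expert_2))  # 4/7 ≈ 0.571
```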

Citations (5)


... In a similar vein, we also plan to explore mechanisms by which order sets might be shared in an interoperable way [7, 8]. Finally, all of the sites studied for this paper used manual processes to develop order sets – some automated processes have also been described in the literature by us and others, and we intend to further explore these processes both as tools to develop novel order sets and as tools for localizing shared order sets [9, 10]. ...

Reference:

Order Sets in Computerized Physician Order Entry Systems: an Analysis of Seven Sites
Information Retrieval Performance of Probabilistically Generated, Problem-Specific Computerized Provider Order Entry Pick-Lists: A Pilot Study
  • Citing Article
  • January 2005

Journal of the American Medical Informatics Association

... The instructions given to the annotators were only the names of the entity classes to ensure that the instructions were similar to the ones given to the models annotating the data. We used one set of annotations as ground truth for our experiments and the second set to calculate the pairwise interannotator agreement [28], which was an F1-score of 0.73 for drugs and 0.48 for procedures. Once the models were trained on the synthetic data annotated by the GPT models, we could use the models on the test data in a secure local environment and retrieve the results. ...

Agreement, the F-Measure, and Reliability in Information Retrieval
  • Citing Article
  • January 2005

Journal of the American Medical Informatics Association

... This intersection of HCI and Health research strives to address important health challenges through technology, such as engaging patients in their care through peer-support communities [45,55,72,92,94], patient-generated health apps [29,37,103,106], and health information portals [27,54,71]. Researchers have aimed to support the documentation, coordination, and decision-making work of healthcare providers [26,57,69,89,102], and health services such as patient-centric services, safety, and outcomes [39,50,59,65]. Cross-disciplinary teams of HCI and health researchers have also explored technologies to address challenges people have in managing chronic conditions [70,78,86,107], acute events [99,120,128], or health within everyday and clinical contexts [14,58,61,62]. ...

Leveraging systems thinking to design patient-centered clinical documentation systems
  • Citing Article
  • July 2005

International Journal of Medical Informatics

... The use of standardized vocabularies has paved the way for performing matching based on term (or semantic) similarity [12,13]. In this regard, many methods have emerged to compare pheno-clinical data encoded with SNOMED CT [14], OMIM [15], ICD-10 [16] or HPO (Human Phenotype Ontology) [17] terms [4,10,18-36]. The widespread use of HPO in comparison methods is not coincidental, stemming from its foundation as a standardized vocabulary that structures data into a directed acyclic graph (DAG). ...

Inter-patient distance metrics using SNOMED CT defining relationships
  • Citing Article
  • January 2007

Journal of Biomedical Informatics

... The desire amongst clinicians to avoid cluttering the problem list [29], and the uncertainty surrounding which specialities are responsible for updating and maintaining the problem list [30,31], may explain why certain acute presentations are less likely to be recorded on the problem list. Inter-rater agreement between clinicians as to what should, and what should not, be added to the problem list is especially poor for secondary diagnoses and complications that have arisen from primary problem list terms [32-34]. In combination with other studies [25,27], we found that chronic conditions such as type 2 diabetes mellitus and asthma were more likely to be recorded on the problem list. ...

Inter-rater Agreement in Physician-coded Problem Lists
  • Citing Article
  • February 2005

AMIA Annual Symposium Proceedings