Case Retrieval in Medical Databases by Fusing Heterogeneous Information

Department of Image and Information Processing (Image et Traitement de l'Information), Institut Telecom/Telecom Bretagne, F-29200 Brest, France.
IEEE Transactions on Medical Imaging 01/2011; 30(1):108-18. DOI: 10.1109/TMI.2010.2063711
Source: PubMed

ABSTRACT: A novel content-based heterogeneous information retrieval framework, particularly well suited to browsing medical databases and supporting new-generation computer-aided diagnosis (CADx) systems, is presented in this paper. It was designed to retrieve possibly incomplete documents, consisting of several images and semantic information, from a database; more complex data types, such as videos, can also be included in the framework. The proposed retrieval method relies on image processing, to characterize each individual image in a document by its digital content, and on information fusion. Once the available images in a query document are characterized, a degree of match between the query document and each reference document stored in the database is defined for each attribute (an image feature or a metadata item). A Bayesian network is used to recover missing information if needed. Finally, two novel information fusion methods are proposed to combine these degrees of match and rank the reference documents by decreasing relevance for the query. In the first method, the degrees of match are fused by the Bayesian network itself. In the second method, they are fused using the Dezert-Smarandache theory: this approach lets us model our confidence in each source of information (i.e., each attribute) and take it into account in the fusion process for better retrieval performance. The proposed methods were applied to two heterogeneous medical databases, a diabetic retinopathy database and a mammography screening database, for computer-aided diagnosis. Precisions at five of 0.809 ± 0.158 and 0.821 ± 0.177, respectively, were obtained for these two databases, which is very promising.
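A minimal sketch of the retrieval and evaluation steps described in the abstract, assuming a simple confidence-weighted average as a stand-in for the paper's Bayesian-network and Dezert-Smarandache fusion; all function names, attribute names, and numeric values below are illustrative assumptions:

```python
# Sketch: rank reference documents by fused per-attribute degrees of match
# and evaluate precision at five. The weighted average below is NOT the
# paper's fusion rule; the paper fuses with a Bayesian network or DSmT.
from typing import Dict, List

def fuse_degrees_of_match(degrees: Dict[str, float],
                          confidences: Dict[str, float]) -> float:
    """Combine per-attribute degrees of match into one relevance score.
    `confidences` plays the role of the per-source confidence modeled by
    the DSmT-based method, reduced here to a weighted average."""
    weighted = [confidences.get(a, 1.0) * d for a, d in degrees.items()]
    total_weight = sum(confidences.get(a, 1.0) for a in degrees)
    return sum(weighted) / total_weight if total_weight else 0.0

def precision_at_k(ranked_relevance: List[bool], k: int = 5) -> float:
    """Fraction of the top-k retrieved documents that are relevant."""
    return sum(ranked_relevance[:k]) / float(k)

# Toy usage: two reference documents, two attributes
# (one image feature, one metadata field).
confidences = {"image_feature": 0.8, "metadata": 0.6}
references = {
    "doc_1": {"image_feature": 0.9, "metadata": 0.7},
    "doc_2": {"image_feature": 0.4, "metadata": 0.8},
}
scores = {doc: fuse_degrees_of_match(attrs, confidences)
          for doc, attrs in references.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)                                                # ['doc_1', 'doc_2']
print(precision_at_k([True, False, True, True, False], k=5))  # 0.6
```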

    • "Thus, it is not possible to notify relevant experts on the fly to give help to a patient who is becoming depressed. Due to the continuing advance of information technology, increasing numbers of studies are utilizing information technology for the automatic inference of the status of patients and to obtain results [22]. In various research areas, ontology [23] is an important technique for representing the terminology used in certain domains. "
    ABSTRACT: Depression has recently become a common disease worldwide, and most people are unaware of their own risk of becoming depressed in daily life. Accurately assessing the possibility of depression is therefore an important issue in the health-care domain. In this paper, we build an inference model based on an ontology and a Bayesian network to infer the possibility of depression, and we implement a prototype on a mobile agent platform as a proof of concept using a mobile cloud. We propose an ontology model to capture the terminology of depression and use a Bayesian network to infer the probability of depression. In addition, the work is implemented with multiple agents and runs on the Android platform to demonstrate its feasibility and address implementation issues. The results show that the model performs well at inferring a depression diagnosis.
    Future Generation Computer Systems 08/2014; 43-44. DOI:10.1016/j.future.2014.05.004 · 2.79 Impact Factor
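As a toy illustration of the Bayesian inference step used in the work above, a minimal sketch assuming a naive-Bayes structure (a depression state node with conditionally independent symptom nodes); the symptom names and probability values are invented for the sketch and are not those of the cited ontology-based model:

```python
# Posterior probability of depression given observed symptoms, by Bayes' rule.
# All priors, likelihoods, and symptom names below are made-up placeholders.
prior = {"depressed": 0.1, "not_depressed": 0.9}
# P(symptom observed | state)
likelihood = {
    "insomnia":      {"depressed": 0.7, "not_depressed": 0.20},
    "low_mood":      {"depressed": 0.8, "not_depressed": 0.10},
    "appetite_loss": {"depressed": 0.5, "not_depressed": 0.15},
}

def posterior_depressed(observed_symptoms):
    """Return P(depressed | observed symptoms) under the toy model."""
    joint = dict(prior)  # start from the prior over states
    for state in joint:
        for symptom in observed_symptoms:
            joint[state] *= likelihood[symptom][state]
    return joint["depressed"] / sum(joint.values())

print(round(posterior_depressed(["insomnia", "low_mood"]), 3))
```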
    • "Considering related diseases have similar solutions, medical case retrieval is suggested by heterogeneous information fusion [4]. In addition, support vector machine (SVM)-based frameworks are also popular in medical image retrieval system during image filtering and dynamic features fusion [5]. "
    KSII Transactions on Internet and Information Systems 02/2014; 8(1):249-268. · 0.56 Impact Factor
    • "Several studies have shown that combining CBIR with natural language processing (NLP) of medical unstructured text (e.g., anamnesis, diagnosis) associated with the images and hosted in the EMR may significantly improve query completion [80,81]. Case retrieval based on both image and contextual information has been used, e.g., by Quellec et al., who developed a framework for the retrieval of cases in medical databases [82]. Results from ImageCLEF show that combining textual and visual information is important for effective retrieval [74]. "
    ABSTRACT: Radiologists' training is based on intensive practice and can be improved with the use of diagnostic training systems. However, existing systems typically require laboriously prepared training cases and lack integration into the clinical environment with a proper learning scenario. Consequently, diagnostic training systems advancing decision-making skills are not well established in radiological education. We investigated didactic concepts and appraised methods appropriate to the radiology domain, as follows: (i) Adult learning theories stress the importance of work-related practice gained in a team of problem-solvers; (ii) Case-based reasoning (CBR) parallels the human problem-solving process; (iii) Content-based image retrieval (CBIR) can be useful for computer-aided diagnosis (CAD). To overcome the known drawbacks of existing learning systems, we developed the concept of image-based case retrieval for radiological education (IBCR-RE). The IBCR-RE diagnostic training is embedded into a didactic framework based on the Seven Jump approach, which is well established in problem-based learning (PBL). In order to provide a learning environment that is as similar as possible to radiological practice, we have analysed the radiological workflow and environment. We mapped the IBCR-RE diagnostic training approach into the Image Retrieval in Medical Applications (IRMA) framework, resulting in the proposed concept of the IRMAdiag training application. IRMAdiag makes use of the modular structure of IRMA and comprises (i) the IRMA core, i.e., the IRMA CBIR engine; and (ii) the IRMAcon viewer. We propose embedding IRMAdiag into hospital information technology (IT) infrastructure using the standard protocols Digital Imaging and Communications in Medicine (DICOM) and Health Level Seven (HL7). Furthermore, we present a case description and a scheme of planned evaluations to comprehensively assess the system. The IBCR-RE paradigm incorporates a novel combination of essential aspects of diagnostic learning in radiology: (i) Provision of work-relevant experiences in a training environment integrated into the radiologist's working context; (ii) Up-to-date training cases that do not require cumbersome preparation because they are provided by routinely generated electronic medical records; (iii) Support of the way adults learn while remaining suitable for the patient- and problem-oriented nature of medicine. Future work will address unanswered questions to complete the implementation of the IRMAdiag trainer.
    BMC Medical Informatics and Decision Making 10/2011; 11:68. DOI:10.1186/1472-6947-11-68 · 1.83 Impact Factor
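The excerpt above notes that combining textual and visual information improves retrieval. A minimal sketch of such score-level (late) fusion follows; the bag-of-words cosine similarity, the precomputed image similarity, and the fixed fusion weight are illustrative assumptions, not the method used by IRMA or the ImageCLEF systems:

```python
# Late fusion of textual and visual similarity for case retrieval.
# The text similarity is a plain bag-of-words cosine; image similarity is
# assumed to come from a separate CBIR engine and is hard-coded here.
import math
from collections import Counter

def cosine(u: Counter, v: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(u[t] * v[t] for t in set(u) & set(v))
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

def text_similarity(report_a: str, report_b: str) -> float:
    return cosine(Counter(report_a.lower().split()),
                  Counter(report_b.lower().split()))

def fused_score(text_sim: float, image_sim: float, alpha: float = 0.5) -> float:
    """Convex combination of textual and visual similarity (alpha assumed)."""
    return alpha * text_sim + (1.0 - alpha) * image_sim

query_report = "suspicious mass in the upper right lung lobe"
case_report = "mass detected in right lung, upper lobe"
print(fused_score(text_similarity(query_report, case_report), image_sim=0.72))
```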