Francisco Maria Calisto
University of Lisbon | UL · Department of Informatics

PhD
Human-Computer Interaction and Health Informatics enthusiast working as Researcher & Software Engineer.

About

69 Publications · 0 Reads
554 Citations
Introduction
Francisco's primary research areas are in the fields of Human-Computer Interaction (HCI), Human-Robot Interaction (HRI), Health Informatics (HI), and Artificial Intelligence (AI). His research focuses on understanding and reducing the user burdens of interactive technologies for health professionals, patients, and people with disabilities through the design of future applications. He has designed, developed, and evaluated mobile, sensor-based, and collaborative applications that help users such as physicians and patients. His primary research methods involve human-centered design, technology development, and a mix of qualitative and quantitative methods. Repository: http://fmcalisto.github.io/#researcher
Additional affiliations
September 2016 - February 2017
University of Lisbon
Position
  • Lecturer
Description
  • Worked as a Supporting Lecturer for the Usability and Information Systems (USI) course unit of the Master's Degree in Information and Enterprise Systems (MISE). This unit gives students knowledge of designing and developing user interfaces.
July 2015 - October 2017
INESC-ID
Position
  • Research Assistant
June 2015 - October 2015
INESC-ID
Position
  • Research Trainee
Description
  • The IDSS Lab aims to design novel processes, techniques, and technologies.
Education
September 2015 - February 2018
University of Lisbon
Field of study
  • Information Systems and Computer Engineering
September 2010 - February 2017
University of Lisbon
Field of study
  • Computer Science and Engineering

Publications (69)
Conference Paper
A fundamental step in medical diagnosis for patient follow-up relies on the ability of radiologists to perform a trustworthy diagnosis from acquired images. The diagnosis strongly depends on visual inspection of the shape of the lesions. As datasets increase in size, such visual evaluation becomes harder. For this reason, it is crucial...
Conference Paper
Full-text available
This paper describes the field research, design, and comparative deployment of a multimodal medical imaging user interface for breast screening. The main contributions described here are threefold: 1) The design of an advanced visual interface for multimodal diagnosis of breast cancer (BreastScreening); 2) Insights from the field comparison of Sing...
Article
In this research, we take an HCI perspective on the opportunities provided by AI techniques in medical imaging, focusing on workflow efficiency and quality, preventing errors and variability of diagnosis in Breast Cancer. Starting from a holistic understanding of the clinical context, we developed BreastScreening to support Multimodality and integr...
Thesis
Full-text available
As intelligent agents advance, they promise to enhance decision-making in high-stakes domains. This thesis focuses on designing and adapting these agents for specific audiences like radiology clinicians. It explores prerequisites for integrating anthropomorphic intelligent agents as second-reader diagnostic support, their role in clinical workflows...
Research Proposal
Full-text available
This research proposal is part of the 2nd cycle Integrated Project for the MSc in Computer Science and Engineering. The focus is on developing a web-based system for semantic annotation of medical images tailored for breast cancer diagnosis that will generate lexical classification of breast lesions to assist the integration of artificial intellig...
Poster
Full-text available
This poster presents the BreastScreening-AI project, which explores the transformative role of Artificial Intelligence (AI) in breast cancer diagnosis. The project emphasizes the integration of AI into healthcare systems while ensuring patient privacy, ethical standards, and user engagement. The research focuses on how AI can enhance early detectio...
Presentation
Full-text available
The BreastScreening-AI project leverages AI to make breast cancer diagnostics faster, more accurate, and more trustworthy for clinicians. Integrating AI with medical imaging enhances decision-making while ensuring transparency and explainability so doctors can understand and trust the results. The project aims to reshape breast cancer diagnosis by...
Conference Paper
Full-text available
Intelligent agents are showing increasing promise for clinical decision-making in a variety of healthcare settings. While a substantial body of work has contributed to the best strategies to convey these agents’ decisions to clinicians, few have considered the impact of personalizing and customizing these communications on the clinicians’ performan...
Conference Paper
Full-text available
The detection and classification of breast cancer lesions with computer-aided diagnosis systems has seen a huge boost in recent years due to deep learning. However, most works focus on 2D image modalities. Dealing with 3D MRI adds new challenges, such as data insufficiency and lack of local annotations. To handle these issues, this work proposes a...
Conference Paper
Full-text available
Magnetic Resonance Imaging (MRI) is the recommended imaging modality in the diagnosis of breast cancer. However, each MRI scan comprises dozens of volumes for the radiologist to inspect, each providing its own set of information on the tissues being scanned. This paper proposes a multimodal framework that processes all the available MRI data in ord...
Poster
Full-text available
External validation of a deep learning model for breast density classification based on convolutional neural networks.
Book
Full-text available
X Consenso Nacional de Cancro da Mama (10th National Breast Cancer Consensus), developed by dozens of professionals from the various centers across the country and from the different specialties dedicated to the treatment of breast cancer, convened to review the scientific literature and draw up the national recommendations with the support of the Sociedade Portuguesa de Senologia, in the following area...
Article
Artificial intelligence has the potential to transform many application domains fundamentally. One notable example is clinical radiology. A growing number of decision-making support systems are available for lesion detection and segmentation, two fundamental steps to accomplish diagnosis and treatment planning. This paper proposes a model based on...
Article
In this paper, we developed BreastScreening-AI within two scenarios for the classification of multimodal breast images: (1) Clinician-Only; and (2) Clinician-AI. The novelty lies in the introduction of a deep learning method into a real clinical workflow for medical imaging diagnosis. We attempt to address three high-level goals in the two above s...
Thesis
Full-text available
Computer-Aided Diagnosis (CADx) systems are essential when diagnosing patients with cancer. Medical Imaging Multimodality Breast Cancer Diagnosis User Interface (MIMBCD-UI) is a Computer-aided Detection (CADe) system that allows users to open, view, and manipulate medical images in order to diagnose patients with breast cancer. In this work, we aim to imp...
Research Proposal
Full-text available
The goal of this thesis is the study, design, and development, as well as evaluation of novel AI-based visual representations supported by intelligent agents for medical imaging diagnosis. For this purpose, recent achievements are built on the accuracy of intelligent agents. To address the effects of varied visual representations, the intelligent a...
Data
Problems and Contributions figure for the Report of the Thesis Proposal in the Doctoral Program of Computer Science and Engineering at Instituto Superior Técnico, University of Lisbon. In this research, three scientific problems were addressed: (1) mitigating the bias between the clinical background of the medical imaging workflow and the related w...
Poster
Full-text available
In this poster, we present the development of a framework that provides a User Interface (UI) to annotate and visualize masses and calcifications of breast cancer lesions in a multimodality strategy. The multimodality strategy supports the following image modalities: (i) MammoGraphy (MG) in both CranioCaudal (CC) and MedioLateral Obli...
Technical Report
Full-text available
In this invention proposal, we propose a method and process using a system for providing a User Interface (UI) to annotate and visualize masses and calcifications of breast cancer lesions in a Multimodality strategy. The Multimodality strategy supports the following image modalities: (i) MammoGraphy (MG) in both CranioCaudal (CC) and MedioLateral O...
Preprint
Full-text available
This paper describes the field research, design and comparative deployment of a multimodal medical imaging user interface for breast screening. The main contributions described here are threefold: 1) The design of an advanced visual interface for multimodal diagnosis of breast cancer (BreastScreening); 2) Insights from the field comparison of singl...
Data
User Testing Architecture Tree describes the architecture that will be followed in the next iterations of the MIMBCD-UI project. In this architectural tree, there are 8 definitions: (1) PROJECT; (2) User Testing and Analysis (UTA); (3) PHASE; (4) SCENARIO; (5) ANALYSIS; (6) FORMALITY; (7) ACTIVITY; and (8) TASK. The top definition...
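As a loose illustration only (the truncated caption does not state how the eight definitions nest), such an architecture tree could be modeled with nested TypeScript types like the ones below, assuming each level simply contains a list of the next level; every name beyond the eight listed terms is hypothetical.

```typescript
// Hypothetical nesting of the eight UTA definitions; the real tree in the
// MIMBCD-UI documentation may group these levels differently.
interface Task      { name: string }
interface Activity  { name: string; tasks: Task[] }
interface Formality { name: string; activities: Activity[] }
interface Analysis  { name: string; formalities: Formality[] }
interface Scenario  { name: string; analyses: Analysis[] }
interface Phase     { name: string; scenarios: Scenario[] }
interface UserTestingAndAnalysis { name: string; phases: Phase[] } // UTA
interface Project   { name: string; utas: UserTestingAndAnalysis[] }
```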
Data
Schematic demonstrating both Front-end and Back-end components of the system with integrated medical imaging solutions. In our solution, we show the use of technologies such as NodeJS and CornerstoneJS for the Medical Imaging Viewers, as well as the Orthanc Server for the DICOM Server component.
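For context, the sketch below shows how a NodeJS/TypeScript back end might query an Orthanc DICOM server through its standard REST endpoints before handing instance identifiers to a CornerstoneJS viewer; the /studies and /studies/{id}/instances paths are part of Orthanc's documented REST API, while the base URL, helper names, and error handling are assumptions for illustration.

```typescript
// Minimal sketch (TypeScript, Node 18+ with global fetch): query an Orthanc
// DICOM server through its standard REST API. The base URL and helper names
// are hypothetical; only the /studies and /studies/{id}/instances endpoints
// come from Orthanc's documented REST interface.
const ORTHANC_URL = "http://localhost:8042"; // assumed default Orthanc port

interface OrthancInstance {
  ID: string;                              // Orthanc's internal instance identifier
  MainDicomTags: Record<string, string>;   // selected DICOM tags for this instance
}

// List every study known to the server (returns Orthanc study IDs).
async function listStudies(): Promise<string[]> {
  const res = await fetch(`${ORTHANC_URL}/studies`);
  if (!res.ok) throw new Error(`Orthanc error: ${res.status}`);
  return res.json();
}

// List the instances of one study; the front end would turn each instance ID
// into an image reference for the CornerstoneJS viewer.
async function listInstancesOfStudy(studyId: string): Promise<OrthancInstance[]> {
  const res = await fetch(`${ORTHANC_URL}/studies/${studyId}/instances`);
  if (!res.ok) throw new Error(`Orthanc error: ${res.status}`);
  return res.json();
}
```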
Data
The schematic diagram for the annotations flow and JSON file generation.
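As a purely illustrative companion to that diagram, the TypeScript interface below sketches one plausible shape for an annotation record before it is serialized to a JSON file; the field names are assumptions, not the project's actual schema.

```typescript
// Hypothetical shape of a single annotation record as it might be
// serialized to JSON; field names are illustrative only.
interface LesionAnnotation {
  patientId: string;
  studyInstanceUid: string;                 // DICOM study the image belongs to
  sopInstanceUid: string;                   // specific image instance annotated
  lesionType: "mass" | "calcification";
  points: Array<{ x: number; y: number }>;  // free-hand contour in pixel coordinates
  createdAt: string;                        // ISO-8601 timestamp
}

// Serialize a batch of annotations into the JSON payload written to file.
function toJsonFile(annotations: LesionAnnotation[]): string {
  return JSON.stringify({ version: 1, annotations }, null, 2);
}
```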
Data
For clinicians, the relation between AI and ML in Medical Imaging diagnosis. More precisely, the AI models rely on rules and logic, yet when relevant pattern conditions are missing, the AI fails clinically and inexplicably. This differs from ML methods, which can achieve pattern recognition over patients; however, they need to compute large amounts of da...
Data
Evolving the basic architecture of the MIMBCD-UI project with the integration of the Orthanc Server and the XAI features. This image represents the order of actions a user would typically follow after this thesis; the red square marks what this thesis implements.
Data
Calcification Types - Diffuse is when the calcifications are scattered "randomly". Regional is when the calcifications are close to each other, forming a "circle". Group is a small area with a few calcifications. Linear is when the calcifications form "lines". Segmental is similar to Regional but more oval in shape.
Data
MIMBCD-UI: Scalable Interactions architecture evolving the basic architecture of the MIMBCD-UI project with the integration of the Orthanc Server and CornerstoneJS image manipulation.
Data
Lesion Types - the first row shows the types of shape: round, oval, lobulated, irregular, and architectural distortion. The second row shows the types of lesion margin: circumscribed, obscured, microlobulated, indistinct, and spiculated.
Data
Bar chart - each bar represents the importance each value had for the classification. On the left, red bars indicate negative importance, blue bars indicate positive importance, and the line in the middle marks 0. On the right, each bar represents the percentage that each value adds to the AI classification.
Data
3D Module feature that receives breast cancer imaging modalities such as CT, DTS, and MRI and converts them into a 3D model of the lesion. This feature will be developed in the MIMBCD-UI: Scalable Interactions project.
Data
Breast Types - In the one on the left (Not Dense), as you can see, the breast has transparency compared to the one on the right (Dense), where most of the breast appears white.
Data
Temporal View feature offers a visualization of a lesion through time by presenting previous annotations on top of the present lesion. This enables the detection of size variation in the lesion. This feature will be developed in the project MIMBCD-UI: Scalable Interactions.
Data
Recorded View feature gives the physician the possibility to store all previously executed actions; this way, when the physician changes to another modality, no already-processed information is lost and it can be viewed again. This feature will be developed in the project MIMBCD-UI: Scalable Interactions.
Data
MIMBCD-UI: Scalable Interactions black box architecture that exists inside the MIMBCD-UI project.
Data
Coordinated View feature aims to reduce diagnosis time by automatically opening the opposite image in the opposite available viewport, adjusting all image positions to be closer to each other, and allowing an action on one image to have the same effect on all other images opened in the other viewports. This featur...
Data
Overall architecture - The red box labeled "XAI" is what this thesis adds to the overall project.
Data
Labels of the Lesions - the grey area is the lesion, the yellow represents the annotations made by the physician, the green represents the shape volume, and the blue... The image on the right explains how the coordinates are represented; the image on the left shows how it would appear on the screen.
Technical Report
Full-text available
In this UTA, we aim to demographically assess the main characteristics and user profiles of the medical imaging community. Additionally, we will address the community's acceptance of the AI topic so that we can understand the potential adoption of AI in the clinical workflow. As a demographic and domain study, this UTA is the 8th (UTA8) reporting gui...
Data
System setup as a cloud-based solution, showing the communication between CornerstoneJS and Orthanc.
Poster
Full-text available
Artificial Intelligence has the potential to alter many application domains fundamentally. One prominent example is clinical radiology. The literature hypothesizes that Deep Learning algorithms will profoundly affect the clinical workflow. In this work, we utilized the unprecedented opportunity presented by developing Radiomics to investigate how a...
Data
Our BreastScreening assistant provides several features regarding the basics of Radiomics. From there, we will be able to validate our Machine Learning (ML) methods along with physicians.
Data
The viewer and annotation window. Images are displayed in the web viewer and the Radiologist records image annotations using drawing tools and an annotation window.
Data
Image retrieval from the Orthanc Servers to the DICOM viewer. In this figure, the DICOM viewer is supported by both the CornerstoneJS library and the Cornerstone Prototype. Radiologists can then annotate each lesion, providing this data to the Deep Learning Algorithms.
Data
This diagram shows that a given patient has a set of medical imaging studies. Each study is made up of a set of series. Each series is, in turn, a set of instances.
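That hierarchy follows the standard DICOM information model and could be expressed, for instance, with the nested TypeScript types below (type and field names are illustrative, not taken from the project).

```typescript
// Illustrative types for the DICOM patient/study/series/instance hierarchy.
interface DicomInstance { sopInstanceUid: string }                                  // one image/object
interface DicomSeries   { seriesInstanceUid: string; instances: DicomInstance[] }   // a set of instances
interface DicomStudy    { studyInstanceUid: string; series: DicomSeries[] }         // a set of series
interface DicomPatient  { patientId: string; studies: DicomStudy[] }                // a set of studies
```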
Data
Sample diagram of the DICOM meta tag structures essential for retrieval. CornerstoneJS links each tag to show the right image in the viewer.
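Without reproducing the exact diagram, a handful of standard DICOM attributes are usually enough for a viewer to locate and render the right image; the constant below lists them by their standard tag numbers (the object itself is only an illustration).

```typescript
// Standard DICOM tags commonly needed to retrieve and display an image.
// Tag numbers come from the DICOM standard; the object name is illustrative.
const RETRIEVAL_TAGS = {
  studyInstanceUid:  "(0020,000D)", // groups instances into a study
  seriesInstanceUid: "(0020,000E)", // groups instances into a series
  sopInstanceUid:    "(0008,0018)", // identifies the individual image
  modality:          "(0008,0060)", // e.g. MG, US, MR
  rows:              "(0028,0010)", // image height in pixels
  columns:           "(0028,0011)", // image width in pixels
  pixelSpacing:      "(0028,0030)", // physical pixel size in mm
} as const;
```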
Data
The FeedBot project is an autonomous robot arm that feeds patients with cerebral palsy. The objective of the User Testing Guide is to develop a set of tests that not only characterize the potential users of the system but also analyze the performance of the daily meal task carried out by the users and caregivers with the robot. This questionnaire a...
Poster
Full-text available
Artificial Intelligence has the potential to alter many application domains fundamentally. One prominent example is clinical radiology. The literature hypothesizes that Deep Learning algorithms will profoundly affect the clinical workflow. In this work, we utilized the unprecedented opportunity presented by developing Radiomics to investigate how a...
Data
Artificial Intelligence (AI) supporting the current Medical Imaging (MI) situation with decision-making as a second opinion for Radiologists. The flow starts with image acquisition from each patient and continues to the phase of placing those images in a patient database. From there, Radiologists can examine each image and write a final report. At this point, resear...
Raw Data
Full-text available
In our research, the DOTS Survey document is a simple item-based Trust scale. This document aims to measure Trust in the Radiology Room (RR) across well-defined categories. It is intended to be used in several User Tests where Researchers can retrieve information from Clinicians regarding Trust in the RR with respect to novel systems. Repository:...
Data
The FeedBot project is an autonomous robot arm that feeds patients with cerebral palsy. The objective of the User Testing Guide is to develop a set of tests that not only characterize the potential users of the system but also analyze the performance of the daily meal task carried out by the users and caregivers with the robot. This questionnaire a...
Data
The FeedBot project is an autonomous robot arm that feeds patients with cerebral palsy. The objective of the User Testing Guide is to develop a set of tests that not only characterize the potential users of the system but also analyze the performance of the daily meal task carried out by the users and caregivers with the robot. This questionnaire a...
Technical Report
Full-text available
The FeedBot project is an autonomous robot arm that feeds patients with cerebral palsy. The objective of the User Testing Guide is to develop a set of tests that not only characterize the potential users of the system but also analyze the performance of the daily meal task carried out by the users and caregivers with the robot. User Testing Guide D...
Technical Report
Full-text available
This document aims to describe the protocol and guidelines for the following information. We perform a set of tests in the scope of Multi-Modality, Assistant and Heatmap prototypes, respectively. The repositories are part of the MIDA [1] project using traditional devices (mouse and keyboard). The goal of the test is to compare each prototype, measu...
Presentation
Full-text available
Presentation of the IT-MEDEX Project [1] for the Closure Workshop [2], presenting the work done at the ISS'17 Conference [3], titled "Towards Touch-Based Medical Image Diagnosis Annotation" [4]. The presentation was given at INESC-ID [5]. [1]: it-medex.inesc-id.pt [2]: it-medex.inesc-id.pt/workshop-closing-itmedex [3]: iss2017.acm.org [4]: dl....
Technical Report
Full-text available
The FeedBot project is an autonomous robot arm that feeds patients with cerebral palsy. The objective of the User Testing Guide is to develop a set of tests that not only characterize the potential users of the system but also analyze the performance of the daily meal task carried out by the users and caregivers with the robot. User Testing Guide U...
Data
The FeedBot project is an autonomous robot arm that feeds patients with cerebral palsy. The objective of the User Testing Guide is to develop a set of tests that not only characterize the potential users of the system but also analyze the performance of the daily meal task carried out by the users and caregivers with the robot. This questionnaire a...
Poster
Full-text available
We present an assistant for fully automated breast cancer detection and segmentation from multi-modal medical images, introducing clinical covariates. This assistant will be able to: 1. Collect a huge amount of ground truth (annotations) concerning two types of lesions (i.e., masses and calcifications) in all image modalities; 2. Provide the cl...
Poster
Full-text available
We present an assistant for fully automated breast cancer detection and segmentation from multi-modal medical images, introducing clinical covariates. This assistant will be able to: 1. Collect a huge amount of ground truth (annotations) concerning two types of lesions (i.e., masses and calcifications) in all image modalities; 2. Provide the cl...
Raw Data
Full-text available
In Breast Cancer Diagnosis, the BIRADS Survey document is a simple seven-item breast severity scale. This document aims to measure the severity and findings of the mammogram screening across well-defined categories. It is intended to be used in several User Tests where Researchers can retrieve information from Clinicians regarding the BIRA...
Data
In the MIMBCD-UI project, the SUS Survey Template File document is a simple and reliable instrument for measuring usability. This document aims to measure usability in the Radiology Room (RR) across well-defined categories. It is intended to be used in several User Tests where Researchers can retrieve information from Clinicians regarding the...
Data
In Breast Cancer Diagnosis, the NASA-TLX Survey Template File document is a simple item-based Workload scale. This document aims to measure Workload in the Radiology Room (RR) across well-defined categories. It is intended to be used in several User Tests where Researchers can retrieve information from Clinicians regarding the Workload of th...
Thesis
Full-text available
Breast cancer is one of the most commonly occurring types of cancer among women. The primary strategy to reduce mortality is early detection and treatment based on medical imaging technologies. The current workflow applied in breast cancer diagnosis involves several imaging multimodalities. The fact that no single modality has high enough sensitivi...
Data
This work gave us information about which tools are most useful for our UI and what the feature priorities were. Surveys and quantitative measures are analyzed with conventional descriptive statistics. Meaning is added from those outcomes to the qualitative findings, providing indicators of feature positioning and interactio...
Technical Report
Full-text available
This paper builds on previous work related to the DELPHI method, and in particular the Q-Sort method for retrieving information from a panel of experts, to provide a new and simple algorithm for generating Q-Sort matrices that adjust to the size of a given survey so that more questions have a null weight for the outcome of the round, giving exp...
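Since the abstract is truncated, the paper's exact algorithm is not reproduced here; as a rough illustration of the general idea of a size-adjusted Q-Sort distribution, the sketch below builds a symmetric, quasi-normal column layout for a given number of statements and hands rounding leftovers to the central (null-weight) columns first. It is a generic sketch under those assumptions, not the published algorithm.

```typescript
// Generic sketch of a quasi-normal Q-Sort column layout: given the number of
// statements and an odd number of columns, return how many statements go in
// each column (from most-disagree to most-agree). This is NOT the paper's
// algorithm, only an illustration of a size-adjusted distribution.
function qSortLayout(statements: number, columns: number): number[] {
  if (columns % 2 === 0) throw new Error("use an odd number of columns");
  const mid = Math.floor(columns / 2);
  // Triangular weights peaking at the middle (zero-weight) column.
  const weights = Array.from({ length: columns }, (_, i) => mid + 1 - Math.abs(i - mid));
  const total = weights.reduce((a, b) => a + b, 0);
  // Proportional allocation, rounded down.
  const layout = weights.map(w => Math.floor((w / total) * statements));
  // Hand out the rounding leftovers to the columns nearest the middle,
  // so larger surveys widen the central (null-weight) region first.
  let leftover = statements - layout.reduce((a, b) => a + b, 0);
  for (let d = 0; leftover > 0; d = (d + 1) % (mid + 1)) {
    layout[mid - d] += 1; leftover -= 1;
    if (leftover > 0 && d > 0) { layout[mid + d] += 1; leftover -= 1; }
  }
  return layout;
}

// Example: 30 statements over 7 columns.
console.log(qSortLayout(30, 7)); // -> [1, 4, 6, 8, 6, 4, 1]
```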

Questions (3)
Question
Artificial intelligence and machine learning are increasingly being applied to medical imaging, with the goal of improving diagnostic accuracy and efficiency. However, the adoption and use of these tools in practice can be influenced by clinicians' perceptions and attitudes toward them.
Therefore, we have three questions to cover these concerns:
1. Do you have any experience with artificial intelligence and machine learning in medical imaging?
2. How have your perceptions and attitudes towards these technologies affected your adoption and use of them in practice?
3. What factors do you think contribute to positive or negative perceptions and attitudes toward these technologies?
We invite researchers and practitioners with experience in this area to share their insights and experiences and to discuss the ways in which clinicians' perceptions and attitudes towards artificial intelligence and machine learning in medical imaging may influence their adoption and use of these tools.
Question
Different venues require different kinds of expertise ratings from researchers. The expertise scale is fourfold: (1) Expert; (2) Knowledgeable; (3) Passing Knowledge; and (4) No Knowledge. As researchers, we need to be as honest as possible about our expertise. Therefore, it is of chief importance to understand where and when each expertise rating should be applied.
If someone is strong in both the domain and the methods, this person should surely be rated "Expert". But what if this person is a Ph.D. student in the first or second year of studies, who has nevertheless been through several publications and revision processes? Conversely, consider a Full Professor with immense experience in the broader major area but, for instance, without experience in either the specific domain or topic. Should this person be rated "No Knowledge" for the purpose?
Question
Over time, Artificial Intelligence (AI) has shifted from simple algorithms, which rely on programmed rules and agent logic, to Machine Learning (ML) solutions. On the one hand, such algorithms contain few rules; on the other hand, they ingest training data from hospitals to learn by trial and error, which characterizes ML behavior.
Machines may be able to integrate a large amount of patient data, but the problem is when and where, and under which clinical circumstances: for one patient, two patients, three patients, or no patient at all. Clinicians differ in their mental state during patient diagnosis. How can the clinical and workflow changes under different circumstances be programmed into AI? How could AI learn from clinicians? And how much power (i.e., patient data) should the computer need? [DOI: 10.13140/RG.2.2.11112.83207]
