Lab
MedGIFT
Institution: HES-SO Valais-Wallis
About the lab
The MedGIFT project started at the medical faculty of the University of Geneva, Switzerland, in 2002 and has been located in the Institute for Business Information Systems at the HES-SO in Sierre (Valais), Switzerland, since 2007. The name stems from the original use of the GNU Image Finding Tool (GIFT) for medical applications. Over the years the GIFT has been used less frequently, and a large set of tools and applications has been developed to advance the field of medical visual information retrieval. All developed tools are open source and can be requested by email; some very old tools might no longer be available. A very strong collaboration with medical informatics and the University Hospitals and University of Geneva, Switzerland, continues to keep the group's activities anchored in medical informatics.
Featured research (34)
Background: Risk of bias (RoB) assessment of randomized clinical trials (RCTs) is vital to answering systematic review questions accurately. Manual RoB assessment for hundreds of RCTs is a cognitively demanding and lengthy process. Automation has the potential to assist reviewers in rapidly identifying text descriptions in RCTs that indicate potential risks of bias. However, no corpus with annotated RoB text spans has been available to fine-tune or evaluate large language models (LLMs), and there are no established guidelines for annotating RoB spans in RCTs.
Objective: The revised Cochrane risk-of-bias tool (RoB 2) provides comprehensive guidelines for RoB assessment; however, due to the inherent subjectivity of this tool, it cannot be used directly as RoB annotation guidelines. Our objective was to develop precise RoB text span annotation instructions that address this subjectivity and thus aid corpus annotation.
Methods: We leveraged the RoB 2 guidelines to develop visual instructional placards that serve as text annotation guidelines for RoB spans and risk judgments. Expert annotators employed these visual placards to annotate a dataset named RoBuster, consisting of 41 full-text RCTs from the domains of physiotherapy and rehabilitation. We report inter-annotator agreement (IAA) between two expert annotators for text span annotations before and after applying the visual instructions on a subset (9 of 41 RCTs) of RoBuster, as well as IAA on bias risk judgments using Cohen's kappa. Moreover, we used a portion of RoBuster (10 of 41 RCTs) to evaluate an LLM (GPT-3.5) on the challenging task of RoB span extraction, demonstrating the utility of the corpus with a straightforward evaluation framework.
Results: We present a corpus of 41 RCTs with fine-grained text span annotations comprising more than 28,427 tokens belonging to 22 RoB classes. The IAA at the text span level, calculated using the F1 measure, varies from 0% to 90%, while Cohen's kappa for risk judgments ranges between -0.235 and 1.0. Employing visual instructions for annotation increases the IAA by more than 17 percentage points. The LLM (GPT-3.5) shows promising but varied agreement with the expert annotations across the different bias questions.
Conclusions: Despite comprehensive bias assessment guidelines and visual instructional placards, RoB annotation remains a complex task. Using visual placards for bias assessment and annotation improves IAA compared to annotating without them; however, text annotation remains challenging for the more subjective questions and for questions whose supporting information is not reported in the RCTs. Similarly, while GPT-3.5 demonstrates effectiveness, its accuracy diminishes for more subjective RoB questions and when little information is available.
All data is available at https://zenodo.org/records/8363126
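For readers who want to reuse RoBuster, here is a minimal, hypothetical sketch of the two agreement measures named above: token-level F1 for span agreement and Cohen's kappa for risk judgments. It is not the authors' evaluation code, and the span and label formats are illustrative assumptions rather than the RoBuster schema.

```python
# Minimal sketch (not the authors' code) of span-level F1 and Cohen's kappa,
# the two agreement measures reported for RoBuster. Input formats are assumptions.

from typing import List, Set, Tuple


def span_tokens(spans: List[Tuple[int, int]]) -> Set[int]:
    """Expand (start, end) offsets into the set of covered token positions."""
    covered = set()
    for start, end in spans:
        covered.update(range(start, end))
    return covered


def span_f1(spans_a: List[Tuple[int, int]], spans_b: List[Tuple[int, int]]) -> float:
    """Token-level F1 of annotator A's spans measured against annotator B's spans."""
    a, b = span_tokens(spans_a), span_tokens(spans_b)
    if not a and not b:
        return 1.0  # both annotators marked nothing: perfect agreement
    overlap = len(a & b)
    precision = overlap / len(a) if a else 0.0
    recall = overlap / len(b) if b else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


def cohens_kappa(labels_a: List[str], labels_b: List[str]) -> float:
    """Cohen's kappa for two annotators' categorical risk judgments."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(x == y for x, y in zip(labels_a, labels_b)) / n
    categories = set(labels_a) | set(labels_b)
    expected = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n) for c in categories
    )
    if expected == 1.0:
        return 1.0  # degenerate case: a single category used throughout
    return (observed - expected) / (1 - expected)


if __name__ == "__main__":
    # Toy example: two annotators mark RoB spans and give risk judgments.
    print(span_f1([(10, 25), (40, 60)], [(12, 25), (40, 55)]))
    print(cohens_kappa(["low", "high", "some"], ["low", "high", "high"]))
```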
Decades of research into hand-object interaction and manipulation skills have yielded fundamental insights with applications in robotics and motor learning. Nevertheless, integrating visual function (especially binocular function, which is important for depth perception) into this equation is crucial, forming a triangle between vision, reaching, and object manipulation.
The ReGraD dataset provides kinematic data recorded during hand-object interaction under monocular and binocular viewing conditions at different depths. It comprises two sub-datasets: ReGraD A (two measurement sessions) allows assessment of test-retest reliability, whilst ReGraD B (one session) characterizes individuals with and without visual disorders. ReGraD includes 35 controls and 3 patients with amblyopia, aged 6 to 35 years.
The ReGraD dataset may help to (1) gain insights into hand-object interaction under various viewing conditions and depths, (2) assess reliability and reproducibility, and (3) examine the effects of group (controls vs. patients) and age, among others. The dataset contains raw data that can also be used to develop algorithms for data segmentation and interpolation of kinematic recordings.
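As a small illustration of the interpolation use case mentioned above, the following sketch fills short dropouts in a single marker trajectory by linear interpolation. It is not part of ReGraD's tooling, and the array layout is an assumption.

```python
# Minimal sketch (an assumption, not ReGraD code) of filling short gaps in a
# kinematic signal by linear interpolation over missing (NaN) samples.

import numpy as np


def interpolate_gaps(trajectory: np.ndarray) -> np.ndarray:
    """Linearly interpolate NaN gaps in a 1-D position signal (e.g., one marker axis)."""
    result = trajectory.astype(float)
    samples = np.arange(result.size)
    missing = np.isnan(result)
    if missing.all():
        return result  # nothing valid to anchor the interpolation on
    result[missing] = np.interp(samples[missing], samples[~missing], result[~missing])
    return result


if __name__ == "__main__":
    # Toy signal with a two-sample dropout in the middle.
    signal = np.array([0.0, 1.0, np.nan, np.nan, 4.0, 5.0])
    print(interpolate_gaps(signal))  # [0. 1. 2. 3. 4. 5.]
```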
Members (16)
Pierre-Alexandre Poletti
Anastasia Rozhyna
Arthur Chevalley