Jeroen Ooge

KU Leuven · Department of Computer Science

Master of Science
Looking for a research visit related to XAI and visual analytics

About

7 Publications
996 Reads
16 Citations (since 2016)
Introduction
I am mainly working on human-centred explainable AI for laypeople: I study how visual explanations can be tailored to people's needs and how they affect concepts such as model understanding and appropriate trust. While education is my favourite application domain, I have also enjoyed conducting studies in healthcare and agrifood. Besides explainable AI, I am very interested in motivational techniques such as gamification.

Publications (7)
Conference Paper
Full-text available
Gamification researchers deem adolescents a particularly interesting audience for tailored gamification. However, empirical validation of popular player typologies and personality trait models has thus far been limited to adults. As adolescents exhibit complex behaviours that differ from those of adults, these models may need adaptation. To that end,...
Article
Full-text available
To make predictions and explore large datasets, healthcare is increasingly applying advanced algorithms of artificial intelligence. However, to make well-considered and trustworthy decisions, healthcare professionals require ways to gain insights in these algorithms' outputs. One approach is visual analytics, which integrates humans in decision-mak...
Conference Paper
Full-text available
Artificial intelligence (AI) is becoming ubiquitous in the lives of both researchers and non-researchers, but AI models often lack transparency. To make well-informed and trustworthy decisions based on these models, people require explanations that indicate how to interpret the model outcomes. This paper presents our ongoing research in explainable...
Conference Paper
Full-text available
In the scope of explainable artificial intelligence, explanation techniques are heavily studied to increase trust in recommender systems. However, studies on explaining recommendations typically target adults in e-commerce or media contexts; e-learning has received less research attention. To address these limits, we investigated how explanations a...
Article
Full-text available
The rise of ‘big data’ in agrifood has increased the need for decision support systems that harvest the power of artificial intelligence. While many such systems have been proposed, their uptake is limited, for example because they often lack uncertainty representations and are rarely designed in a user-centred way. We present a prototypical visual...
Conference Paper
Full-text available
People's trust in prediction models can be affected by many factors, including domain expertise like knowledge about the application domain and experience with predictive modelling. However, to what extent and why domain expertise impacts people's trust is not entirely clear. In addition, accurately measuring people's trust remains challenging. We...
Preprint
BACKGROUND Type 2 Diabetes Mellitus (T2DM) is a common cause of mortality worldwide: each year, the chronic disease kills over one million people, making it the ninth leading cause of death. The growing use of smartphone applications (apps) led to focusing on mobile health, which is increasingly oriented towards self-care of T2DM. With smartphone a...
