Sule Tekkesinoglu
  • Doctor of Philosophy
  • PostDoc at University of Oxford

About

23
Publications
15,168
Reads
590
Citations
Introduction
I am a postdoctoral researcher at the Oxford Robotics Institute (ORI), where I research explainable and trustworthy autonomous driving. I received my PhD from Umeå University, Sweden, in September 2022. During my PhD, I worked on generating and presenting intelligible explanations for predictions and decisions made by black-box algorithms. My research interests lie in machine learning, explainable artificial intelligence, and human-machine collaboration.
Current institution
University of Oxford
Current position
  • PostDoc
Education
June 2012 - July 2014
University of Technology Malaysia
Field of study
  • Computer Science

Publications

Publications (23)
Article
Full-text available
Automation of rubber tree clone classification has inspired research into new methods of leaf feature extraction. In current practice, rubber clone inspectors have been using several leaf features to identify clone types. One of the unique features of the rubber tree leaf is its palmate leaflets. This characteristic generates different leaflet positions, wh...
Conference Paper
Full-text available
Humans are increasingly relying on complex systems that heavily adopt Artificial Intelligence (AI) techniques. Such systems are employed in a growing number of domains, and making them explainable is a pressing priority. Recently, the domain of eXplainable Artificial Intelligence (XAI) emerged with the aims of fostering transparency and trustwo...
Conference Paper
Full-text available
The significant advances in autonomous systems, together with an immensely wider application domain, have increased the need for trustworthy intelligent systems. Explainable artificial intelligence is gaining considerable attention among researchers and developers to address this requirement. Although there is an increasing number of works on interpret...
Conference Paper
Full-text available
In this paper, we present the Py-CIU library, a generic Python tool for applying the Contextual Importance and Utility (CIU) explainable machine learning method. CIU uses concepts from decision theory to explain a machine learning model's prediction specific to a given data point by investigating the importance and usefulness of individual features...
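The abstract above describes CIU's core idea: probing how much a single feature can move a model's output (contextual importance) and where the current output sits within that span (contextual utility). The following is a minimal sketch of that idea under a simplified formulation; it does not reproduce the Py-CIU library's actual API, and the model and feature ranges are illustrative assumptions.

```python
import numpy as np

def ciu_for_feature(predict, x, i, lo, hi, n_samples=100):
    """Estimate contextual importance (CI) and utility (CU) for feature i
    of instance x, by sweeping that feature over [lo, hi] while holding
    the other features fixed (simplified CIU formulation)."""
    grid = np.linspace(lo, hi, n_samples)
    outs = []
    for v in grid:
        xv = x.copy()
        xv[i] = v
        outs.append(predict(xv))
    outs = np.array(outs)
    cmin, cmax = outs.min(), outs.max()
    # CI: share of the output range this feature controls
    # (assuming the model output is already scaled to [0, 1]).
    ci = cmax - cmin
    # CU: where the actual output falls within that span.
    cu = 0.5 if cmax == cmin else (predict(x) - cmin) / (cmax - cmin)
    return ci, cu

# Toy linear model: output depends strongly on feature 0, weakly on feature 1.
model = lambda x: 0.9 * x[0] + 0.1 * x[1]
x = np.array([0.5, 0.5])
ci0, cu0 = ciu_for_feature(model, x, 0, 0.0, 1.0)  # ci0 ≈ 0.9
ci1, cu1 = ciu_for_feature(model, x, 1, 0.0, 1.0)  # ci1 ≈ 0.1
```

For this toy model, feature 0 receives a much higher contextual importance than feature 1, matching the decision-theoretic intuition the abstract describes.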
Article
Full-text available
Given the uncertainty surrounding how existing explainability methods for autonomous vehicles (AVs) meet the diverse needs of stakeholders, a thorough investigation is imperative to determine the contexts requiring explanations and suitable interaction strategies. A comprehensive review becomes crucial to assess the alignment of current approaches...
Preprint
Full-text available
As machine learning becomes increasingly integral to autonomous decision-making processes involving human interaction, the need to comprehend the model's outputs through conversational means grows. Most recently, foundation models are being explored for their potential as post hoc explainers, providing a pathway to elucidate the decisio...
Preprint
Full-text available
Given the uncertainty surrounding how existing explainability methods for autonomous vehicles (AVs) meet the diverse needs of stakeholders, a thorough investigation is imperative to determine the contexts requiring explanations and suitable interaction strategies. A comprehensive review becomes crucial to assess the alignment of current approaches...
Article
Full-text available
Graph-based representations are becoming more common in the medical domain, where each node defines a patient and the edges signify associations between patients, relating individuals with disease and symptoms in a node classification task. In this study, a Graph Convolutional Network (GCN) model was utilized to capture differences i...
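The node-classification setup described above can be illustrated with a single GCN propagation step in the standard Kipf-and-Welling style (symmetric normalisation with self-loops). This is a generic sketch; the study's actual medical graph, features, and model architecture are not reproduced, and the toy adjacency and dimensions below are assumptions.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN layer: ReLU(D^-1/2 (A + I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # symmetric degree normalisation
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)  # ReLU

# Toy patient graph: 4 nodes (patients); edges encode patient associations.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.rand(4, 3)   # 3 clinical features per patient (random stand-in)
W = np.random.rand(3, 2)   # learned projection to 2 hidden units
H = gcn_layer(A, X, W)     # H has shape (4, 2): one embedding per patient
```

Stacking such layers and adding a softmax output over the final embeddings yields per-node (per-patient) class predictions, which is the node classification task the abstract refers to.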
Preprint
Full-text available
Commentary driving is a technique in which drivers verbalise their observations, assessments, and intentions. By speaking their thoughts aloud, both learner and expert drivers are able to create a better understanding and awareness of their surroundings. In the intelligent vehicle context, automated driving commentary can provide intelligible explana...
Article
Full-text available
With the increased use of machine learning in decision-making scenarios, there has been a growing interest in explaining and understanding the outcomes of machine learning models. Despite this growing interest, existing works on interpretability and explanations have been mostly intended for expert users. Explanations for general users have been ne...
Chapter
Full-text available
Autonomous agents and robots with vision capabilities powered by machine learning algorithms such as Deep Neural Networks (DNNs) are being deployed in many industrial environments. While DNNs have improved the accuracy of many prediction tasks, it has been shown that even modest disturbances in their input produce erroneous results. Such errors have to be...
Preprint
Full-text available
The significant advances in autonomous systems, together with an immensely wider application domain, have increased the need for trustworthy intelligent systems. Explainable artificial intelligence is gaining considerable attention among researchers and developers to address this requirement. Although there is an increasing number of works on interpret...
Article
Full-text available
A frequently mentioned benefit of gesture-based input to computing systems is that it provides naturalness in interaction. However, it is not uncommon to find gesture sets consisting of arbitrary (hand) formations with illogically mapped functions. This defeats the purpose of using gestures as a means to facilitate natural interaction. The root of t...
Chapter
Full-text available
The significant advances in autonomous systems, together with an immensely wider application domain, have increased the need for trustworthy intelligent systems. Explainable artificial intelligence is gaining considerable attention among researchers and developers to address this requirement. Although there is an increasing number of works on interpret...
Article
Full-text available
Hevea (rubber tree) leaf characteristics have not yet been utilized in automation systems to classify different clones. However, rubber tree leaves have some features that can be used to differentiate the clones. The rubber tree leaf is in the class of palmate leaves, which means three leaflets are joined at one mutual base. This unique feature give...
Article
Full-text available
The goal of this study is to present a concept for identifying overlapping rubber tree (Hevea brasiliensis) leaf boundaries. Rubber tree leaves are similar to one another and may contain similar information such as the color, texture, or shape of the leaves. In fact, rubber tree leaves naturally belong to the class of palmate leaves, i...
Article
Full-text available
The goal of this work is to present a concept for web-based Augmented Reality. There are many examples of Augmented Reality systems in different fields, from military to medical applications and from entertainment to manufacturing. In this paper, we examine how virtual environments can be combined with web-based applications. Internet users...