Fares Abawi

University of Hamburg | UHH · Department of Informatics

Research Associate - University of Hamburg

About

15 Publications
1,873 Reads
37 Citations


Publications (15)
Conference Paper
A prototype of a low-cost, wearable system is presented to assist persons with hearing impairments by detecting in-house alert sounds and estimating their direction of arrival (DOA). The prototype is composed of a circular microphone array, a small microcomputer, and motors for vibration alerting. The array includes five microphones uniformly distr...
Article
Full-text available
Learning hierarchical abstractions from sequences is a challenging and open problem for Recurrent Neural Networks (RNNs). This is mainly due to the difficulty of detecting features that span long time distances and occur at different frequencies. In this paper, we address this challenge by introducing surprisal-based activation, a novel method t...
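As a rough illustration of the underlying idea (not the paper's actual mechanism), the standard information-theoretic surprisal of an observation under a predictive distribution is -log p(x); a minimal sketch of gating an update on surprisal follows, where the function names and the threshold value are illustrative, not taken from the publication:

```python
import numpy as np

def surprisal(probs, index):
    """Surprisal (in nats) of observing symbol `index` under distribution `probs`."""
    return -np.log(probs[index])

# Toy predictive distribution over a 4-symbol alphabet
probs = np.array([0.7, 0.1, 0.1, 0.1])

# An expected symbol carries little surprisal; an unexpected one carries more.
low = surprisal(probs, 0)   # -log(0.7)
high = surprisal(probs, 1)  # -log(0.1)

# A surprisal-gated update (hypothetical): only propagate a state change
# when the current input is sufficiently surprising.
threshold = 1.0
update_on_expected = low > threshold     # False: expected input is ignored
update_on_surprise = high > threshold    # True: surprising input triggers an update
```

The intuition is that rarely surprising, high-frequency regularities can be absorbed at lower layers, while surprising events propagate upward, which is one way hierarchical abstraction over different timescales can emerge.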
Conference Paper
Full-text available
In this paper, we present an autonomous AI system designed for a Human-Robot Interaction (HRI) study, set around a dice game scenario. We conduct a case study to answer our research question: Does a robot with a socially engaged personality lead to higher acceptance than one with a competitive personality? The flexibility of our proposed system allows us...
Article
Full-text available
We present a follow-up study on our unified visuomotor neural model for the robotic tasks of identifying, localizing, and grasping a target object in a scene with multiple objects. Our RetinaNet-based model enables end-to-end training of visuomotor abilities in a biologically inspired developmental approach. In our initial implementation, a neural...
Conference Paper
Full-text available
Saliency prediction refers to the computational task of modeling overt attention. Social cues greatly influence our attention, consequently altering our eye movements and behavior. To emphasize the efficacy of such features, we present a neural model for integrating social cues and weighting their influences. Our model consists of two stages. Durin...
Preprint
Full-text available
Human eye gaze plays an important role in delivering information, communicating intent, and understanding others' mental states. Previous research shows that a robot's gaze can also affect humans' decision-making and strategy during an interaction. However, few studies have trained humanoid robots on gaze-based data in human-robot interaction s...
Preprint
Saliency prediction refers to the computational task of modeling overt attention. Social cues greatly influence our attention, consequently altering our eye movements and behavior. To emphasize the efficacy of such features, we present a neural model for integrating social cues and weighting their influences. Our model consists of two stages. Durin...
Preprint
Full-text available
Due to the COVID-19 pandemic, robots could be seen as potential resources in tasks like helping people work remotely, maintaining social distancing, and improving mental or physical health. To enhance human-robot interaction, it is essential for robots to become more socialised by processing multiple social cues in a complex real-world environment...
Chapter
Continual or lifelong learning has been a long-standing challenge in machine learning to date, especially in natural language processing (NLP). Although state-of-the-art language models such as BERT have ushered in a new era in this field due to their outstanding performance in multitask learning scenarios, they suffer from forgetting when being ex...
Preprint
Full-text available
Continual or lifelong learning has been a long-standing challenge in machine learning to date, especially in natural language processing (NLP). Although state-of-the-art language models such as BERT have ushered in a new era in this field due to their outstanding performance in multitask learning scenarios, they suffer from forgetting when being ex...
Preprint
Full-text available
We present a follow-up study on our unified visuomotor neural model for the robotic tasks of identifying, localizing, and grasping a target object in a scene with multiple objects. Our RetinaNet-based model enables end-to-end training of visuomotor abilities in a biologically inspired developmental approach. In our initial implementation, a neural...
Conference Paper
Full-text available
We present a unified visuomotor neural architecture for the robotic task of identifying, localizing, and grasping a goal object in a cluttered scene. The RetinaNet-based neural architecture enables end-to-end training of visuomotor abilities in a biologically inspired developmental approach. We demonstrate a successful development and evalua...
Conference Paper
Full-text available
Learning hierarchical abstractions from sequences is a challenging and open problem for Recurrent Neural Networks (RNNs). This is mainly due to the difficulty of detecting features that span long distances and occur at different frequencies. In this paper, we address this challenge by introducing surprisal-based activation, a novel method to pres...
Conference Paper
Full-text available
In this paper, we tackle sequence-to-tree transduction for language processing with neural networks implementing several subtasks, namely tokenization, semantic annotation, and tree generation. Our research question is how the individual subtasks influence the overall end-to-end learning performance in case of a convolutional network with dilated p...
Article
Full-text available
A practical mathematical programming based approach is introduced for solving the examination timetabling problem at the German Jordanian University (GJU), whereby the complex process of acquiring a feasible examination timetable is simplified by subdividing it into three smaller sub-problems (phases). Accordingly, the exams are initially allocated...
