Zeynep Yucel’s research while affiliated with Centrum Wiskunde & Informatica and other places


Publications (6)


Joint visual attention modeling for naturally interacting robotic agents
  • Conference Paper
  • Full-text available

October 2009 · 79 Reads · 24 Citations

Z. Yucel · [...] · C. Merigli · [...]

This paper elaborates on mechanisms for establishing visual joint attention in the design of robotic agents that learn through natural interfaces, following a developmental trajectory not unlike that of infants. We first describe the evolution of cognitive skills in infants and then the adaptation of cognitive development patterns to robotic design. A comprehensive outlook on cognitively inspired robotic design schemes pertaining to joint attention over the last decade is presented, with particular emphasis on practical implementation issues. Finally, a novel cognitively inspired joint attention fixation mechanism is defined for robotic agents.


Resolution of Focus of Attention Using Gaze Direction Estimation and Saliency Computation

October 2009 · 12 Reads · 10 Citations

Modeling the user's attention is useful for responsive and interactive systems. This paper proposes a method for establishing joint visual attention between an experimenter and an intelligent agent. A rapid procedure is described to track the 3D head pose of the experimenter, which is used to approximate the gaze direction. The head is modeled with a sparse grid of points sampled from the surface of a cylinder. We then propose to employ a bottom-up saliency model to single out interesting objects in the neighborhood of the estimated focus of attention. We report results on a series of experiments, where a human experimenter looks at objects placed at different locations of the visual field, and the proposed algorithm is used to locate target objects automatically. Our results indicate that the proposed approach achieves high localization accuracy and thus constitutes a useful tool for the construction of natural human-computer interfaces.
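As a rough illustration of the pipeline described in this abstract — approximate the gaze direction from head pose, then use bottom-up saliency to single out an object near the estimated focus — the following sketch combines a crude center-surround saliency map with a Gaussian proximity weight around the gaze point. The box-filter saliency measure, window sizes, and weighting scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def box_mean(img, k):
    """Mean filter of side 2k+1 via an integral image (edge padding)."""
    n = 2 * k + 1
    p = np.pad(img.astype(float), k, mode='edge')
    ii = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
    ii[1:, 1:] = p.cumsum(0).cumsum(1)
    return (ii[n:, n:] - ii[:-n, n:] - ii[n:, :-n] + ii[:-n, :-n]) / n ** 2

def resolve_focus(img, gaze_xy, sigma=10.0):
    """Pick the most salient location near the estimated gaze point.

    Saliency here is a crude center-surround contrast; the Gaussian
    proximity weight models uncertainty in the gaze direction estimate."""
    sal = np.abs(box_mean(img, 1) - box_mean(img, 7))
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    prox = np.exp(-((xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2)
                  / (2 * sigma ** 2))
    r, c = np.unravel_index(np.argmax(sal * prox), sal.shape)
    return c, r  # (x, y) of the resolved focus of attention
```

With a synthetic 64×64 image containing a single bright patch, the function returns the patch center when the gaze estimate lands nearby.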


Automated discrimination of psychotropic drugs in mice via computer vision-based analysis

July 2009 · 49 Reads · 4 Citations

Journal of Neuroscience Methods

Zeynep Yucel · [...] · A. Bulent Ozguler

We developed an inexpensive computer vision-based method utilizing an algorithm that differentiates drug-induced behavioral alterations. The mice were observed in an open-field arena and their activity was recorded for 100 min. For each animal, the first 50 min of observation were regarded as the drug-free period. Each animal was exposed to only one drug, injected (i.p.) with either amphetamine or cocaine as stimulant drugs, or morphine or diazepam as inhibitory agents. The software divided the arena into virtual grids and, by analyzing the video data, calculated the number of visits (sojourn counts) to the grid cells and the instantaneous speeds within them. These spatial distributions of sojourn counts and instantaneous speeds were used to construct feature vectors, which were fed to classifier algorithms for the final step of matching animals to drugs. The software determined which animals were drug-treated with 96% accuracy. The algorithm achieved 92% accuracy in sorting the data according to increased or decreased activity, and then determined which drug was delivered. The method differentiated the type of psychostimulant or inhibitory drug with success ratios of 70% and 80%, respectively. This method provides a new way to automatically evaluate and classify drug-induced behaviors in mice.
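The feature construction described in the abstract — per-cell sojourn counts and instantaneous speeds over a virtual grid, concatenated into one feature vector — can be sketched as follows. The 4×4 grid, the arena size, and the nearest-centroid classifier are illustrative assumptions standing in for the paper's unspecified grid resolution and classifier algorithms.

```python
import numpy as np

def sojourn_features(track, arena=100.0, grid=4):
    """Per-cell sojourn counts and mean instantaneous speed over a
    virtual grid, concatenated into one feature vector.
    track: (T, 2) array of (x, y) positions at a fixed frame rate."""
    track = np.asarray(track, dtype=float)
    cells = np.clip((track / arena * grid).astype(int), 0, grid - 1)
    idx = cells[:, 1] * grid + cells[:, 0]
    counts = np.bincount(idx, minlength=grid * grid).astype(float)
    # Instantaneous speed: displacement between consecutive frames.
    speed = np.r_[0.0, np.linalg.norm(np.diff(track, axis=0), axis=1)]
    speed_sum = np.bincount(idx, weights=speed, minlength=grid * grid)
    mean_speed = speed_sum / np.maximum(counts, 1.0)
    return np.r_[counts / counts.sum(), mean_speed]

def nearest_centroid(train_X, train_y, x):
    """Minimal stand-in classifier: label of the closest class centroid."""
    labels = sorted(set(train_y))
    cents = {lab: np.mean([f for f, y in zip(train_X, train_y) if y == lab],
                          axis=0) for lab in labels}
    return min(labels, key=lambda lab: np.linalg.norm(x - cents[lab]))
```

On synthetic tracks — a fast, wide-ranging random walk versus a slow walk confined to a corner — the two feature vectors separate cleanly, so even this minimal classifier assigns new tracks to the right activity class.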


Figure 1. Examples of eye regions sampled by pose (yellow dot meshes)
Table 1. Effect of pose cues in eye localization
Table 2. Comparison of RMSE and STD
Figure 3. A mistake of the standard eye locator (.), corrected by the pose cues (x) according to the reference points (+)
Figure 4. A schematic diagram of the components of the system
Robustifying eye center localization by head pose cues

June 2009 · 101 Reads · 56 Citations

Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR)

Head pose and eye location estimation are two closely related problems that serve similar application areas. In recent years, these problems have been studied individually in numerous works in the literature. Previous research shows that cylindrical head models and isophote-based schemes provide satisfactory precision in head pose and eye location estimation, respectively. However, the eye locator alone cannot accurately locate the eyes in the presence of extreme head poses; head pose cues are therefore well suited to enhancing the accuracy of eye localization under severe head poses. In this paper, a hybrid scheme is proposed in which the transformation matrix obtained from the head pose is used to normalize the eye regions and, in turn, the transformation matrix generated by the found eye location is used to correct the pose estimation procedure. The scheme is designed to (1) enhance the accuracy of eye location estimation in low-resolution videos, (2) extend the operating range of the eye locator, and (3) improve the accuracy and re-initialization capabilities of the pose tracker. The experimental results show that the proposed unified scheme improves the accuracy of eye location estimates by 16% to 23%. Further, it considerably extends the operating range by more than 15°, overcoming the problems introduced by extreme head poses. Finally, the accuracy of the head pose tracker is improved by 12% to 24%.
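The normalization loop in this abstract — use the pose transformation to map eye regions into a frontal frame, locate the eyes there, then map the result back to correct the pose — can be sketched at the coordinate level. The Euler-angle convention and the point-wise round trip are assumptions for illustration; the paper's actual scheme warps image regions, not single points.

```python
import numpy as np

def rotation(yaw, pitch, roll):
    """Head pose as R = Rz(roll) @ Ry(yaw) @ Rx(pitch), angles in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def normalize(p, R):
    """Map a point on the posed head back to the frontal frame (R^-1 = R^T)."""
    return R.T @ p

def denormalize(p, R):
    """Map a frontal-frame point (e.g. a located eye center) back under the pose."""
    return R @ p

# Round trip: an eye position defined in the frontal frame, observed under a
# pose, is recovered exactly by normalization (all coordinates hypothetical).
R = rotation(np.deg2rad(20), np.deg2rad(-5), np.deg2rad(3))
eye_frontal = np.array([30.0, -10.0, 90.0])
observed = denormalize(eye_frontal, R)
recovered = normalize(observed, R)
```

Locating the eye in the normalized frame and back-projecting it is what lets each estimator correct the other in the hybrid scheme.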


Cylindrical model based head pose estimation for drivers

April 2009 · 26 Reads · 1 Citation

The application of action recognition algorithms to driving safety systems is still an open area of research. In terms of driving safety, the identification of head movements provides more significant information than other driver actions. Therefore, in this study we developed a cylindrical model based head pose estimator to track drivers' head movements. The experiments indicate that the proposed scheme achieves significant accuracy in head pose estimation.
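A minimal sketch of the cylindrical head model underlying such an estimator: sample a sparse point grid on a cylinder surface and project it under a candidate pose; tracking then amounts to searching for the pose whose projected grid best matches the observed frame. The cylinder dimensions, sampling density, and pinhole projection are illustrative assumptions, not calibrated values from the paper.

```python
import numpy as np

def cylinder_points(radius=80.0, height=120.0, n_angle=12, n_height=8):
    """Sparse grid of 3D points on the front half of a cylinder surface
    (dimensions in mm are illustrative, not fit to a real head)."""
    thetas = np.linspace(-np.pi / 3, np.pi / 3, n_angle)
    zs = np.linspace(-height / 2, height / 2, n_height)
    return np.array([[radius * np.sin(t), z, radius * np.cos(t)]
                     for z in zs for t in thetas])

def project(points, R, t, f=500.0):
    """Rigidly transform the model by pose (R, t), then apply a pinhole
    projection with focal length f; returns (N, 2) image coordinates.
    Pose tracking minimizes the appearance error at these projections."""
    cam = points @ R.T + np.asarray(t, dtype=float)
    return f * cam[:, :2] / cam[:, 2:3]

model = cylinder_points()
uv = project(model, np.eye(3), (0.0, 0.0, 600.0))  # frontal pose, 60 cm away
```

Each tracked frame re-estimates (R, t) so that the image patches at the projected grid points stay consistent with the reference frame.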


Citations (5)


... Later, relative positions of facial landmarks were applied. A wide variety of ways were proposed to extract and represent facial features [12][13][14][15][16][17]. Using landmarks is computationally efficient due to simple feature representations, and the detected landmarks are robust to transformation. ...

Reference:

A Deformation Field-Based Approach for Expression Processing in Head Motion Correction
Robustifying eye center localization by head pose cues

Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR)

... In recent years, sophisticated algorithms have been developed in which gaze direction guides the identification of objects of interest [235,236,237,238,239]. For instance, in the context of robot learning, Yucel et al. [237] use a joint attention algorithm that works as follows: the instructor's face is first detected using Haar-like features. ...

Joint visual attention modeling for naturally interacting robotic agents

... For the diagnosis of epileptic seizures in EEG recordings, classification experiments using ANN employing wavelet transform are carried out [7]. The diagnosis of epilepsy was provided in the works of Yücel and Özgüler by using the modeling of complicated measurements of EEG signals with varying resolutions [8]. The applicability of time-frequency analysis for classifying epileptic episodes in EEG data segments is proved, and several approaches are evaluated [9]. ...

Detection of epilepsy seizures and epileptic indicators in EEG signals
  • Citing Conference Paper
  • May 2008

... When looking in more detail at the confusion matrices in Fig. 5, it appears that the confusion between classes is mostly between neighboring classes, i.e., neighboring emotions more likely to resemble each other. Therefore, plotting the Cumulative Matching Characteristic (CMC) curves is a more appropriate choice to present the performance of the system as a function of the distance between classes, similar to the approach adopted in [45]. We define the distances between classes as follows: (1) the distance between two classes that (2) the distances between two classes of different quadrants is defined as d q + 1, where d q corresponds to the number of quadrants encountered when departing from one class in order to reach the other class. ...

Resolution of Focus of Attention Using Gaze Direction Estimation and Saliency Computation
  • Citing Conference Paper
  • October 2009

... Animal tracking and recording was performed using an in-house developed tracking software (Evranos-Aksoz et al., 2017;Yucel, Sara, Duygulu, Onur, Esen & Ozguler, 2009). ...

Automated discrimination of psychotropic drugs in mice via computer vision-based analysis
  • Citing Article
  • July 2009

Journal of Neuroscience Methods