Figure 10 - uploaded by Mahdi Rezaei
The structure of the Viola–Jones cascade classifier.  
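The cascade idea in the figure can be sketched in a few lines: stages are ordered from cheapest to most expensive, and a candidate window is discarded the moment any stage rejects it, so most negatives cost only a few feature evaluations. The sketch below is a hypothetical toy, not the actual trained classifier; the stage features and thresholds are made up for illustration.

```python
def cascade_classify(window, stages):
    """stages: list of (score_fn, threshold) pairs, cheapest first."""
    for score_fn, threshold in stages:
        if score_fn(window) < threshold:
            return False  # early rejection: later stages never run
    return True  # survived every stage -> report a detection

# Toy stages: mean brightness, then contrast (hypothetical features).
stages = [
    (lambda w: sum(w) / len(w), 0.3),
    (lambda w: max(w) - min(w), 0.5),
]

print(cascade_classify([0.1, 0.2, 0.1, 0.2], stages))  # rejected at stage 1
print(cascade_classify([0.2, 0.9, 0.3, 0.8], stages))  # passes both stages
```

In a real Viola–Jones cascade each stage is itself a boosted combination of Haar-like features, but the control flow is exactly this early-exit chain.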

Source publication
Article
Full-text available
The design of intelligent driver assistance systems is of increasing importance for the vehicle-producing industry and road-safety solutions. This article starts with a review of road-situation monitoring and driver-behaviour analysis. It also discusses lane tracking using vision (or other) sensors, and the strength or weakness of diffe...

Citations

... Since the seminal studies by Beatty and Kahneman (1966), several authors have shown that pupil dilation may be related to cognitive processing and to the mental effort required to perform a given task (Othman and Romli, 2016; Kosch et al., 2018; van der Wel and van Steenbergen, 2018). This relation has been analyzed in various tasks including short-term memory (Peavler, 1974) and visual search tasks (Porter et al., 2007), but also in air traffic control (Hilburn et al., 1997) and driving (Rezaei and Klette, 2011). Just and Carpenter (1993) identified changes in pupil diameter when understanding single sentences with different degrees of difficulty. ...
Article
Full-text available
Ocular activity is known to be sensitive to variations in mental workload, and recent studies have successfully related the distribution of eye fixations to mental load. This study aimed to verify the effectiveness of the spatial distribution of fixations as a measure of mental workload and its sensitivity to different types of demands imposed by a task: mental, temporal, and physical. To test the research hypothesis, two experimental studies were run. Experiment 1 evaluated the sensitivity of an index of spatial distribution (the Nearest Neighbor Index; NNI) to changes in workload: a sample of 30 participants took part in a within-subject design with different types of task demands (mental, temporal, physical) applied to the Tetris game. Experiment 2 investigated the accuracy of the index through the analysis of 1-min epochs during the execution of a visual-spatial task (the “spot the differences” puzzle game). Additionally, the NNI was compared to a better-known ocular mental-workload index, the entropy rate. The data analysis showed a relation between the NNI and the different workload levels imposed by the tasks. In particular, Experiment 1 demonstrated that increased difficulty due to higher temporal demand led to a more dispersed pattern of fixations with respect to the baseline, whereas higher mental demand led to a more grouped pattern; Experiment 2 indicated that the entropy rate and the NNI show a similar pattern over time, indicating high mental workload after the first minute of activity. This suggests that the NNI highlights the greater presence of fixation groups while the entropy rate indicates a more regular and orderly scanpath. Both indices are sensitive to changes in workload, and both seem to anticipate drops in performance. However, the entropy rate is limited by its reliance on areas of interest, which makes it inapplicable in dynamic contexts. Conversely, the NNI works on the entire scanpath and is sensitive to different types of task demands. These results confirm the NNI as a measure applicable to different contexts, with potential use as a trigger in adaptive systems implemented in high-risk settings such as control rooms and transportation systems.
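For readers unfamiliar with the index, the NNI discussed above compares the mean observed nearest-neighbour distance between fixations against the distance expected under a uniformly random (Poisson) pattern over the same area; values below 1 indicate grouping, values above 1 dispersion. A minimal sketch, assuming fixations are given as (x, y) screen coordinates (toy data, not the study's):

```python
import math

def nni(points, area):
    """Nearest Neighbor Index for a set of (x, y) fixation points."""
    n = len(points)
    # Mean distance from each fixation to its nearest neighbour.
    mean_nn = sum(
        min(math.dist(points[i], points[j]) for j in range(n) if j != i)
        for i in range(n)
    ) / n
    # Expected nearest-neighbour distance under a random (Poisson) pattern.
    expected = 0.5 * math.sqrt(area / n)
    return mean_nn / expected

clustered = [(0, 0), (0, 1), (1, 0), (1, 1)]      # tight group
dispersed = [(0, 0), (0, 99), (99, 0), (99, 99)]  # spread to corners
print(nni(clustered, area=100 * 100))  # well below 1: grouped pattern
print(nni(dispersed, area=100 * 100))  # well above 1: dispersed pattern
```

The brute-force nearest-neighbour search here is quadratic; for long recordings a spatial index (e.g. a k-d tree) would be the practical choice.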
... In 2005, the authors themselves applied their features to detect pedestrians based on their motion and appearance [28]. Beyond face and pedestrian detection, the original framework has also been utilized in many studies to detect various kinds of objects and states, including animal faces [3], vehicles [14], driver behavior [23], smoke images [35], etc. Soo Woong Kim et al. proposed an interesting idea of using the features to detect obstacles and guide a vision-based robot in home cleaning tasks [15]. ...
Article
Full-text available
Traditional iris recognition methods, which are still preferred over artificial intelligence (AI) approaches in practical applications, typically require high-grade iris samples captured by an iris scanner for accurate subsequent processing. To reduce the system cost for mass deployment of iris recognition, pricey scanning devices can be replaced by average-quality cameras combined with an additional processing algorithm. In this paper, we propose a Haar-like-feature-based iris localization method to quickly detect the location of the human iris in images captured by low-cost cameras, easing the later processing stages. The AdaBoost algorithm was chosen as the learning method for training a cascade classifier using Haar-like features, which was then utilized to detect the iris position. The experimental results show acceptable accuracy and processing speed for this novel cascade classifier, which encourages us to adopt this capturing approach in our iris recognition system.
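To make the training step above concrete, here is a minimal AdaBoost sketch with one-dimensional threshold "stumps" standing in for Haar-like feature classifiers. The data, thresholds, and round count are toy values, not the paper's actual setup:

```python
import numpy as np

def train_adaboost(x, y, n_rounds=5):
    """x: 1-D feature values, y: labels in {-1, +1}."""
    n = len(x)
    w = np.full(n, 1.0 / n)          # sample weights, initially uniform
    ensemble = []                    # (threshold, polarity, alpha) per round
    for _ in range(n_rounds):
        best = None
        for t in x:                  # candidate thresholds from the data
            for polarity in (1, -1):
                pred = polarity * np.where(x >= t, 1, -1)
                err = w[pred != y].sum()   # weighted error of this stump
                if best is None or err < best[0]:
                    best = (err, t, polarity)
        err, t, polarity = best
        err = max(err, 1e-10)        # avoid division by zero
        alpha = 0.5 * np.log((1 - err) / err)
        pred = polarity * np.where(x >= t, 1, -1)
        w *= np.exp(-alpha * y * pred)   # re-weight: boost the mistakes
        w /= w.sum()
        ensemble.append((t, polarity, alpha))
    return ensemble

def predict(ensemble, x):
    score = sum(a * p * np.where(x >= t, 1, -1) for t, p, a in ensemble)
    return np.sign(score)

x = np.array([0.1, 0.2, 0.3, 0.8, 0.9, 1.0])
y = np.array([-1, -1, -1, 1, 1, 1])
model = train_adaboost(x, y)
print(predict(model, x))  # separable toy data: recovers the labels
```

In the real cascade, each weak learner thresholds one Haar-like feature response computed over the integral image, and the boosted stages are then chained with per-stage rejection thresholds.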
... Deep-learning-based methodologies are widely used in almost all aspects of our lives, from surveillance systems to autonomous vehicles [23] and robotics, and even in the recent global challenge caused by the COVID-19 pandemic [35]. ...
Preprint
Full-text available
CAPTCHA is a human-centred test to distinguish a human operator from bots, attacking programs, or any other computerised agent that tries to imitate human intelligence. In this research, we investigate a way to crack visual CAPTCHA tests with an automated deep-learning-based solution. The goal of the cracking is to investigate the weaknesses and vulnerabilities of CAPTCHA generators and to develop more robust CAPTCHAs, without the risks of manual trial-and-error efforts. We have developed a Convolutional Neural Network called Deep-CAPTCHA to achieve this goal. We propose a platform to investigate both numerical and alphanumerical image CAPTCHAs. To train and develop an efficient model, we generated 500,000 CAPTCHAs using the Python Image-Captcha Library. In this paper, we present our customised deep neural network model, the research gaps, the existing challenges, and the solutions to overcome these issues. Our network's cracking accuracy reaches 98.94% and 98.31% on the numerical and alphanumerical test datasets, respectively. This means more work needs to be done to make CAPTCHAs robust against bot attacks and artificial agents. As the outcome of this research, we identify some efficient techniques to improve CAPTCHA generators, based on the performance analysis conducted on the Deep-CAPTCHA model.
... contact with the steering wheel, or physiological information such as heart rate, muscle activity, etc.) and cameras (e.g. tracking of eye blinking and head motion) can deduce the level of tiredness and distraction, so that the automated system is able to evaluate the human driver's capacity to take over driving responsibility (Kircher and Ahlstrom 2017; Rezaei and Klette 2011). ...
Article
Full-text available
Road traffic accidents are largely driven by human error; therefore, the development of connected automated vehicles (CAVs) is expected to significantly reduce accident risk. However, these changes are neither proven nor linear, as different levels of automation show risk-related idiosyncrasies. A lack of empirical data aggravates the transparent evaluation of risk arising from CAVs with higher levels of automation capability. Nevertheless, it is likely that the risks associated with CAVs will profoundly reshape the risk profile of the global motor insurance industry. This paper conducts a deep qualitative analysis of the impact of progressive vehicle automation and interconnectedness on the risks covered under motor third-party and comprehensive insurance policies. This analysis is enhanced by an assessment of potential emerging risks such as the risk of cyber-attacks. We find that, in particular, primary insurers focusing on private retail motor insurance face significant strategic risks to their business model. The results of this analysis are not only relevant for insurance but also from a regulatory perspective, as we find a symbiotic relationship between an insurance-related assessment and a comprehensive evaluation of CAVs' inherent societal costs.
... For an actual course of driving we may expect different situations (e.g. driving straight ahead, turning, overtaking, or avoiding lead vehicles, obstacles, or pedestrians), where each case can pose different kinds of challenges [207]. ...
Book
This book summarises the state of the art in computer vision-based driver and road monitoring, focussing on monocular vision technology in particular, with the aim to address challenges of driver assistance and autonomous driving systems. While the systems designed for the assistance of drivers of on-road vehicles are currently converging to the design of autonomous vehicles, the research presented here focuses on scenarios where a driver is still assumed to pay attention to the traffic while operating a partially automated vehicle. Proposing various computer vision algorithms, techniques and methodologies, the authors also provide a general review of computer vision technologies that are relevant for driver assistance and fully autonomous vehicles. Computer Vision for Driver Assistance is the first book of its kind and will appeal to undergraduate and graduate students, researchers, engineers and those generally interested in computer vision-related topics in modern vehicle design.
Chapter
In this chapter we present and discuss the basic computer vision concepts, techniques, and mathematical background that we use in this book. The chapter introduces image notations, the concept of integral images, colour space conversions, the Hough transform for line detection, camera coordinate systems, and stereo computer vision.
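One of the concepts this chapter introduces, the integral image, can be sketched briefly: after a single cumulative-sum pass, the sum of any axis-aligned rectangle costs four lookups, which is what makes Haar-like features cheap to evaluate. A minimal sketch (illustrative, not the book's implementation):

```python
import numpy as np

def integral_image(img):
    # Zero-padded so lookups at row 0 / column 0 need no special cases.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, top, left, h, w):
    # Sum over img[top:top+h, left:left+w] from four corner lookups.
    return (ii[top + h, left + w] - ii[top, left + w]
            - ii[top + h, left] + ii[top, left])

img = np.arange(16, dtype=float).reshape(4, 4)
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 2, 2))  # equals img[1:3, 1:3].sum() == 30.0
```

A two-rectangle Haar-like feature is then just the difference of two such `rect_sum` calls, independent of the rectangle size.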
Chapter
In this chapter we propose a method to assess driver drowsiness based on face and eye-status analysis. The chapter starts with a detailed discussion of effective ways to create a strong classifier (the “training phase”), and it continues with a novel optimization method for the “application phase” of the classifier. Together, these significantly improve the performance of our Haar-like-feature-based detectors in terms of speed, detection rate, and detection accuracy under non-ideal lighting conditions and for noisy images. The proposed framework includes a denoising preprocessing method, the introduction of Global Haar-like features, a fast adaptation method to cope with rapid lighting variations, and an implementation of a Kalman-filter tracker to reduce the search region and to indirectly support our eye-state monitoring system. Experimental results obtained on the MIT-CMU dataset, the Yale dataset, and our recorded videos, together with comparisons against standard Haar-like detectors, show noticeable improvements over previous methods.
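The Kalman-tracker idea mentioned above, predicting the eye-region position so the detector only searches a small window around the prediction, can be sketched with a constant-velocity model. The noise covariances and measurements below are illustrative assumptions, not the book's tuned parameters:

```python
import numpy as np

dt = 1.0
F = np.array([[1, dt], [0, 1]])    # state: [position, velocity]
H = np.array([[1.0, 0.0]])         # we only measure position
Q = np.eye(2) * 1e-3               # process noise (assumed)
R = np.array([[0.5]])              # measurement noise (assumed)

def kalman_step(x, P, z):
    # Predict the next state from the constant-velocity model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z (the detected eye x-coordinate).
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([[0.0], [0.0]]), np.eye(2)
for z in [1.0, 2.1, 2.9, 4.0]:     # noisy eye positions, one per frame
    x, P = kalman_step(x, P, np.array([[z]]))
print(x[0, 0])                      # filtered position, close to 4
print(x[1, 0])                      # estimated velocity, close to 1
```

In a tracker, the predicted position (plus a margin derived from `P`) defines the reduced search region handed to the Haar detector on the next frame.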
Chapter
In this chapter we discuss how to assess the risk level of a given driving scenario based on eight inputs: the driver's direction of attention (yaw, roll, pitch), signs of fatigue or drowsiness (yawning, head nodding, eye closure), and the road situation (distance and angle of the detected vehicles relative to the ego-vehicle). Using a fuzzy-logic inference system, we develop an integrated solution to fuse, interpret, and process all of the above information. The ultimate goal is to prevent a traffic accident by fusing all the available “in-out” data from inside the car cockpit and outside on the road. We aim to warn the driver in case of high-risk driving conditions and to prevent an imminent crash.
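The fuzzy fusion described above can be sketched in miniature. The toy below reduces the eight inputs to two hypothetical ones (eye-closure ratio and headway distance) and one output (risk), with made-up membership functions and rules; the book's actual inference system is far richer:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def risk_level(eye_closure, headway):
    # Fuzzify the crisp inputs (illustrative membership functions).
    drowsy = tri(eye_closure, 0.3, 1.0, 1.7)   # high closure ratio -> drowsy
    close = tri(headway, -20, 0, 20)           # small headway (m) -> close
    # Rules: each fires with the min of its antecedents (fuzzy AND).
    high_risk = min(drowsy, close)             # drowsy AND close -> high risk
    low_risk = 1.0 - max(drowsy, close)        # neither -> low risk
    # Defuzzify by a weighted average of the rule outputs (0 = low, 1 = high).
    return (high_risk * 1.0 + low_risk * 0.0) / (high_risk + low_risk + 1e-9)

print(risk_level(eye_closure=0.9, headway=5))    # drowsy and close: high risk
print(risk_level(eye_closure=0.1, headway=50))   # alert and far: low risk
```

The appeal of this structure for risk fusion is that each rule reads as a plain-language driving heuristic, and adding an input only adds membership functions and rules rather than reshaping the whole model.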
Chapter
In this chapter we outline object detection and object recognition techniques which are of relevance for the remainder of the book. We focus on supervised and unsupervised learning approaches. The chapter provides technical details for each method, discussions on the strengths and weaknesses of each method, and gives examples and various applications for each method. Material is provided to support a decision for an appropriate object detection technique for computer vision applications, including driver-assistance systems.
Chapter
“Collision warning systems” are actively researched in computer vision and the automotive industry. Using monocular vision only, this chapter discusses the part of our study that aims at detecting and tracking the vehicles ahead, identifying safe following distances, and providing timely information to assist a distracted driver under various weather and lighting conditions. As part of the work presented in this chapter, we also adopt the previously discussed dynamic global Haar (DGHaar) features for vehicle detection. We introduce “taillight segmentation” and a “virtual symmetry detection” technique for pairing the rear-light contours of the vehicles on the road. Applying a heuristic geometric solution, we also develop a method for inter-vehicle “distance estimation” using only a monocular vision sensor. Inspired by Dempster–Shafer theory, we finally fuse all the available clues and information to reach a higher degree of certainty. The proposed algorithm is able to detect vehicles ahead both by day and by night, and over a wide range of distances. Experimental results under various conditions, including sunny, rainy, foggy, and snowy weather, show that the proposed algorithm outperforms the other published algorithms selected for comparison.
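As a rough illustration of monocular distance estimation (not the book's heuristic geometric method), a pinhole-camera model gives distance Z ≈ f·W/w for an object of known real width W appearing w pixels wide at focal length f (in pixels). All numbers below are assumptions:

```python
def distance_from_width(focal_px, real_width_m, pixel_width):
    """Pinhole-model distance (m) to an object of known real width."""
    return focal_px * real_width_m / pixel_width

# Assumed camera: 800 px focal length; average car width taken as 1.8 m.
print(distance_from_width(800, 1.8, 120))  # ~12 m when the car is 120 px wide
print(distance_from_width(800, 1.8, 48))   # ~30 m when the car is 48 px wide
```

The weakness of this shortcut is its dependence on an assumed vehicle width, which is one reason methods like the chapter's fuse several geometric cues rather than relying on a single one.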