Fig 1 - uploaded by Ola Younis
Source publication
Peripheral vision loss is the inability to recognise objects and shapes in the outer area of the visual field. This condition can affect people's daily activities and reduce their quality of life. In this work, a smart technology that implements computer vision algorithms in real-time to detect and track moving hazards around people with per...
Context in source publication
Context 1
... vision comprises around 13 degrees. The second type is the peripheral vision used to detect larger contrasts, colours and motion and extends up to 60 degrees nasally, 107 degrees temporally, 70 degrees down and 80 degrees up for each eye [3]. The human visual field of view for both eyes showing different types of peripheral vision is shown in Fig. 1. It is important to mention that human beings don't see in full resolution. Instead, we see fine details using the central vision only, whereas in the peripheral vision we see only significant contrasts, colours and recognise ...
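For concreteness, the angular extents quoted in this excerpt can be tabulated in a short sketch; the figures (13 degrees central, 60 nasal, 107 temporal, 70 down, 80 up per eye) come from the text above, while the function names are illustrative only:

```python
# Visual field extents per eye, in degrees, as quoted in the excerpt above.
FIELD_EXTENTS = {
    "central": 13,    # high-resolution central vision
    "nasal": 60,      # peripheral extent towards the nose
    "temporal": 107,  # peripheral extent towards the temple
    "down": 70,
    "up": 80,
}

def monocular_horizontal_span(extents):
    """Horizontal span covered by one eye: nasal + temporal extent."""
    return extents["nasal"] + extents["temporal"]

def binocular_horizontal_span(extents):
    """Total horizontal span of both eyes, measured from the midline:
    each eye contributes its temporal extent, and each nasal field falls
    inside the region already covered by the other eye."""
    return 2 * extents["temporal"]

print(monocular_horizontal_span(FIELD_EXTENTS))  # 167
print(binocular_horizontal_span(FIELD_EXTENTS))  # 214
```

The 167-degree monocular span and 214-degree binocular span follow directly from the per-eye extents given in the excerpt.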
Similar publications
Visual features extracted by retinal circuits are streamed into higher visual areas (HVAs) after being processed along the visual hierarchy. However, how specialized neuronal representations of HVAs are built, based on retinal output channels, remained unclear. Here, we addressed this question by determining the effects of genetically disrupting re...
Movement patterns in preterm infants can offer crucial insights into their physiological state including maturational development and sleep. These patterns can also serve as early indicators of potential deteriorations, such as cerebral palsy, sepsis, and epilepsy. In this study, we investigated a novel two-dimensional optical fiber mat system for...
The Eye and the Brain vs. the Automated Auto: A Comparison Objectives: How do the eye and brain enable human drivers to see? How does the human process compare with an automated driving system, also termed an automated vehicle or AV? Methods: Briefly summarize the anatomy, physiology, and neural processing of the eye and brain that enables resoluti...
Motion detection is vital for consumer electronics and the Internet of Things (IoT). For a scenario where the motion is slow and gentle, the resolution of the motion sensor is critical for the detection, while algorithm development is another critical issue for differentiating the motion signal from measurement noise. This paper investigates the f...
Monitoring the occupancy of public sports facilities is essential to assess their use and to motivate their construction in new places. In the case of a football field, the area to cover is large, thus several regular cameras should be used, which makes the setup expensive and complex. As an alternative, we developed a system that detects players...
Citations
... Once the model has detected if the user is indoor or outdoor, precise localization capabilities further enhance smart eyewear's ability to provide context-aware services, particularly in navigation and mobility assistance. Smart eyewear has been increasingly utilized for localization and navigation, playing a pivotal role in aiding visually impaired individuals by improving their mobility and independence [135]. To enhance localization accuracy while minimizing power consumption, researchers have proposed low-power multiantenna smart eyewear systems that provide users with directional guidance, enabling intuitive navigation assistance without excessive energy expenditure [136]. ...
Edge devices have garnered significant attention for their ability to process data locally, providing low-latency, context-aware services without the need for extensive reliance on cloud computing. This capability is particularly crucial in context recognition, which enables dynamic adaptation to a user’s real-time environment. Applications range from health monitoring and augmented reality to smart assistance and social interaction analysis. Among edge devices, smart eyewear has emerged as a promising platform for context recognition due to its ability to unobtrusively capture rich, multi-modal sensor data. However, the deployment of context-aware systems on such devices presents unique challenges, including real-time processing, energy efficiency, sensor fusion, and noise management. This manuscript provides a comprehensive survey of context recognition in edge devices, with a specific emphasis on smart eyewear. It reviews the state-of-the-art sensors and applications for context inference. Furthermore, the paper discusses key challenges in achieving reliable, low-latency context recognition while addressing energy and computational constraints. By synthesizing advancements and identifying gaps, this work aims to guide the development of more robust and efficient solutions for context recognition in edge computing.
... Monitoring data, controlling actuators, controlling and interacting with robots, and monitoring structures divide the experimenter's attention. Humans receive between 80 and 90% of information through vision, and the amount of information humans can receive and process is limited by their mental capacity [18]; therefore, AR helps reduce the cognitive load. AR has been applied to robot teleoperation to reduce gaze distraction by augmenting the live video feed from the robot [19]. ...
... A comparison between technologies that use computer vision and those that employ BLE (Bluetooth Low Energy) to assist locomotion in indoor environments is explored in [7]. In the study of [15], the authors propose a system using smart glasses and augmented reality to detect moving objects that may pose hazards in the areas of peripheral vision loss. ...
In this work, a prototype for a low-cost application was developed aiming to assist vision-impaired individuals. The prototype is a wearable device, equipped with a Raspberry Pi for image processing of the frames captured from a camera strapped to the user's chest. The feedback is provided by means of vibration motors, attached to an armband, allowing the user to distinguish obstacles according to their proximity. The obtained preliminary results demonstrated the feasibility of the proposed approach, contributing to a better mobility process with low-cost setups to assist the detection of obstacles.
... The human is first identified as the target model using the deep learning technique. The optimization technique then tracks the person by computing the distance between the target model's color histogram and the subsequent color histogram shape of the selected position [13,14]. A color histogram has many benefits, including quick computing, partial occlusion resistance, non-rigid object tracking, scale invariance, and rotation [15]. ...
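The histogram-comparison step described here is commonly implemented with a Bhattacharyya-style distance between normalised colour histograms; the cited papers do not specify their exact metric, so the following pure-Python sketch is an illustrative assumption, not the authors' implementation:

```python
import math

def normalise(hist):
    """Scale a histogram so its bins sum to 1."""
    total = sum(hist)
    return [h / total for h in hist]

def bhattacharyya_distance(p, q):
    """Distance between two normalised histograms:
    0 means identical distributions, 1 means no overlap."""
    bc = sum(math.sqrt(a * b) for a, b in zip(p, q))
    return math.sqrt(max(0.0, 1.0 - bc))

# Toy 4-bin colour histograms for a target model and a candidate region
# (values are made up for illustration).
target = normalise([12, 30, 40, 18])
candidate = normalise([10, 28, 44, 18])
print(bhattacharyya_distance(target, candidate))  # small value: similar regions
```

A tracker of the kind described would evaluate this distance for candidate positions and keep the one that minimises it.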
... Eqn. (14) is used to determine the F-score: F = (2 × P_r × R_e) / (P_r + R_e) (14), where P_r denotes precision and R_e denotes recall. ...
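The F-score referred to as Eqn. (14) is the standard harmonic mean of precision and recall; a minimal sketch (the optional beta weighting is an addition for generality, not part of the excerpt):

```python
def f_score(precision, recall, beta=1.0):
    """F-measure: with beta=1, the harmonic mean of precision and recall,
    F = 2 * P * R / (P + R), as in Eqn. (14)."""
    if precision == 0 and recall == 0:
        return 0.0  # avoid division by zero for a degenerate classifier
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(f_score(0.8, 0.6))  # 2*0.48/1.4 ≈ 0.6857
```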
Nowadays, the real-time Human Tracking System (HTS) is a crucial topic in computer vision and image processing, with applications such as robotic perception, scene understanding, video surveillance, image compression, medical image analysis, and augmented reality, among many others. In this paper, we design a Faster Region-based Convolutional Neural Network (FR-CNN) with Crow Search Optimization (FR-CNN-CSO) architecture to improve computational complexity and enhance the performance of HTS. The system is implemented in a Python environment with video input. Unnecessary data are removed from the gathered datasets during preprocessing. Next, features are extracted using Histograms of Oriented Gradients (HOG). The extracted features are then fed into the designed FR-CNN model for identifying and tracking a person using crow search fitness. The main goal of the developed approach is to attain accurate prediction results and to improve computational complexity by achieving less execution time. Finally, the experimental outcomes show the reliability of the designed system compared with other conventional techniques in terms of accuracy, precision, recall, F-measure, and execution time.
... The emerging field of extended reality (XR) technology, with its robust audio-visual-spatial capabilities and advanced headsets, has garnered significant attention among researchers for applications such as hazard detection [29] and Visual Field (VF) expansion [30]. XR technology includes three main categories: augmented reality (AR), virtual reality (VR), and mixed reality (MR). ...
Visual field loss (VFL) is a persistent visual impairment characterized by blind spots (scotoma) within the normal visual field, significantly impacting daily activities for affected individuals. Current Virtual Reality (VR) and Augmented Reality (AR)-based visual aids suffer from low video quality, content loss, high levels of contradiction, and limited mobility assessment. To address these issues, we propose an innovative vision aid utilizing an AR headset and integrating advanced video processing techniques to elevate the visual perception of individuals with moderate to severe VFL to levels comparable to those with unimpaired vision. Our approach introduces a pioneering optimal video remapping function tailored to the characteristics of AR glasses. This function strategically maps the content of live video captures to the largest intact region of the visual field map, preserving quality while minimizing blurriness and content distortion. To evaluate the performance of our proposed method, a comprehensive empirical user study is conducted, including object counting and multi-tasking walking track tests and involving 15 subjects with artificially induced scotomas in their normal visual fields. The proposed vision aid achieves a 41.56% enhancement (from 57.31% to 98.87%) in the mean value of the average object recognition rates for all subjects in the object counting test. In the walking track test, the average mean scores for obstacle avoidance, detected signs, recognized signs, and grasped objects are significantly enhanced after applying the remapping function, with improvements of 7.56% (91.10% to 98.66%), 51.81% (44.85% to 96.66%), 49.31% (43.18% to 92.49%), and 77.77% (13.33% to 91.10%), respectively. Statistical analysis of data before and after applying the remapping function demonstrates the promising performance of our method in enhancing visual awareness and mobility for individuals with VFL.
... Also, there was no mention that the system was tested with VIP. Recently, Younis et al. [103] proposed a context-aware outdoor navigation aid for people with peripheral vision impairment. The context-awareness concept, which denotes the system's ability to learn about its surroundings and adjust its behaviour accordingly, was used to develop a hazard detection and tracking system. ...
The development of many tools and technologies for people with visual impairment has become a major priority in the field of assistive technology research. However, many of these technology advancements have limitations in terms of the human aspects of the user experience (e.g., usability, learnability, and time to user adaptation) as well as difficulties in translating research prototypes into production. Also, there was no clear distinction between the assistive aids of adults and children, as well as between “partial impairment” and “total blindness”. As a result of these limitations, the produced aids have not gained much popularity and the intended users are still hesitant to utilise them. This paper presents a comprehensive review of substitutive interventions that aid in adapting to vision loss, centred on laboratory research studies to assess user-system interaction and system validation. Depending on the primary cueing feedback signal offered to the user, these technology aids are categorized as visual, haptics, or auditory-based aids. The context of use, cueing feedback signals, and participation of visually impaired people in the evaluation are all considered while discussing these aids. Based on the findings, a set of recommendations is suggested to assist the scientific community in addressing persisting challenges and restrictions faced by both the totally blind and partially sighted people.
... For any navigational assistance, the medium of instruction in Urdu would be warmly welcomed by the people of Pakistan, especially by the visually impaired. Locating and identifying objects in video frames [56], visual placing [57], tracking [58], and many other state-of-the-art sensor-based approaches [59][60][61] integrate with the language that is mostly understood, to enhance adaptability and improve the concept of an autonomous navigation system for the visually impaired in Pakistan. ...
Visually impaired individuals face many challenges when it comes to object recognition and routing indoors or outdoors. Despite the availability of numerous visual assistance systems, the majority of these systems depend on English auditory feedback, which is not effective for the Pakistani population, since a vast share of Pakistanis cannot comprehend the English language. The primary objective of this study is to consolidate present research related to the use of Urdu auditory feedback for currency and Urdu text detection to assist visually impaired individuals in Pakistan. The study conducted a comprehensive search of six digital libraries, resulting in 50 relevant articles published in the past five years. Based on the results, a taxonomy of visual assistance was developed, and general recommendations and potential research directions were provided. The study utilized firm inclusion/exclusion criteria and appropriate quality assessment methods to minimize potential biases. Results indicate that while most research in this area focuses on navigation assistance through voice audio feedback in English, the majority of the Pakistani population does not understand the language, rendering such systems inefficient. Future research should prioritize object localization and tracking with Urdu auditory feedback to improve navigation assistance for visually impaired individuals in Pakistan. The study concludes that addressing the language barrier is crucial in developing effective visual assistance systems for the visually impaired in Pakistan.
... The actual human Field of View (FOV) shows peripheral vision loss according to the distance from the central vision. This is a very natural phenomenon of human vision and can be used as the most powerful way to focus interest on important subjects [19]. Most amateur photographers place the subject in the center of the frame and capture the audience's attention, even if there is no special background for the photo shoot [20]. ...
In recent times, group emotion recognition has attracted considerable research attention, and diverse applications such as situation-aware models and security surveillance models that require this capability have emerged. Group emotion recognition is a complex task that depends on human visual stimuli and correlation inference. We propose a novel group emotion recognition model using a fuzzy system based on psychological principles. Based on psychological factors defined by a fuzzy system, we classify group emotions into universal classes used to classify seven basic single face emotions: happiness, sadness, surprise, disgust, anger, fear, and neutral, and achieve approximately 14% higher accuracy than an average method in qualitative and quantitative experiments. Furthermore, the contribution of each factor to group emotion recognition was determined using the psychological principles commonly used in cinematography, at a low cost.
... The screen was characterized by a resolution of 1,280 × 1,024 pixels, a refresh rate of 85 Hz, linearized contrast, and a luminance of 35 cd/m² (measured with a J17 LumaColor Photometer, Tektronix). The target visual stimuli were presented in the form of the Gabor patch, a pattern of sinusoidal luminance grating displayed within a Gaussian envelope [full width at half maximum of 2.8 cm, i.e., 1°53′ visual angle, with 7.3 cm, i.e., 4°55′ presentation radius from the fixation cross, staying within the central vision, i.e., <8° radius; (54,55)]. The Gabor patch pattern consisted of 16 cycles, with one cycle made up of one white and one black bar (grating spatial frequency of 8 cycles/degree). ...
Stochastic Resonance (SR) describes a phenomenon where an additive noise (stochastic carrier-wave) enhances the signal transmission in a nonlinear system. In the nervous system, nonlinear properties are present from the level of single ion channels all the way to perception and appear to support the emergence of SR. For example, SR has been repeatedly demonstrated for visual detection tasks, also by adding noise directly to cortical areas via transcranial random noise stimulation (tRNS). When dealing with nonlinear physical systems, it has been suggested that resonance can be induced not only by adding stochastic signals (i.e., noise) but also by adding a large class of signals that are not stochastic in nature which cause "deterministic amplitude resonance" (DAR). Here we mathematically show that high-frequency, deterministic, periodic signals can yield resonance-like effects with linear transfer and infinite signal-to-noise ratio at the output. We tested this prediction empirically and investigated whether non-random, high-frequency, transcranial alternating current stimulation applied to visual cortex could induce resonance-like effects and enhance performance of a visual detection task. We demonstrated in 28 participants that applying 80 Hz triangular-waves or sine-waves with tACS reduced visual contrast detection threshold for optimal brain stimulation intensities. The influence of tACS on contrast sensitivity was equally effective to tRNS-induced modulation, demonstrating that both tACS and tRNS can reduce contrast detection thresholds. Our findings suggest that a resonance-like mechanism can also emerge when deterministic electrical waveforms are applied via tACS.
... The screen was characterized by a resolution of 1280 × 1024 pixels, a refresh rate of 85 Hz, linearized contrast, and a luminance of 35 cd/m² (measured with a J17 LumaColor Photometer, Tektronix). The target visual stimuli were presented on a uniform gray background in the form of a Gabor patch, a pattern of sinusoidal luminance grating displayed within a Gaussian envelope (full width at half maximum of 2.8 cm, i.e., 1°53′ visual angle, with 7.3 cm, i.e., 4°55′ presentation radius from the fixation cross, staying within the central vision, i.e., <8° radius; Strasburger et al., 2011; Younis et al., 2019). The Gabor patch pattern consisted of 16 cycles, with one cycle made up of one white and one black bar (grating spatial frequency of 8 cycles/deg). ...
Transcranial random noise stimulation (tRNS) has been shown to significantly improve visual perception. Previous studies demonstrated that tRNS delivered over cortical areas acutely enhances visual contrast detection of weak stimuli. However, it is currently unknown whether tRNS-induced signal enhancement could be achieved within different neural substrates along the retino-cortical pathway. In three experimental sessions, we tested whether tRNS applied to the primary visual cortex (V1) and/or to the retina improves visual contrast detection. We first measured visual contrast detection threshold (VCT; N = 24, 16 females) during tRNS delivery separately over V1 and over the retina, determined the optimal tRNS intensities for each individual (ind-tRNS), and retested the effects of ind-tRNS within the sessions. We further investigated whether we could reproduce the ind-tRNS-induced modulation on a different session ( N = 19, 14 females). Finally, we tested whether the simultaneous application of ind-tRNS to the retina and V1 causes additive effects. Moreover, we present detailed simulations of the induced electric field across the visual system. We found that at the group level tRNS decreases VCT compared with baseline when delivered to the V1. Beneficial effects of ind-tRNS could be replicated when retested within the same experimental session but not when retested in a separate session. Applying tRNS to the retina did not cause a systematic reduction of VCT, regardless of whether the individually optimized intensity was considered or not. We also did not observe consistent additive effects of V1 and retina stimulation. Our findings demonstrate significant tRNS-induced modulation of visual contrast processing in V1 but not in the retina.