Laboratory of Adaptive Lighting Systems and Visual Processing
About the lab
The Laboratory of Adaptive Lighting Systems and Visual Processing (ALSVV) is part of the Department of Electrical Engineering and Information Technology. The lab has been teaching and conducting research in various areas of lighting technology since its founding in 1956.
Research at the lab focuses on the interaction between people and light in different contexts. The lab conducts both application-oriented and basic research. In addition to the visual and non-visual effects of light on humans and other biological organisms, research is carried out into adaptive lighting solutions for the interior, horticultural, and automotive sectors. Furthermore, semiconductor light sources are analyzed and modeled to better understand and predict their electrical and photometric behavior.
Featured research (36)
In this work, an experiment was designed in which a defined route consisting of country roads, highways, and urban roads was driven by 20 subjects during the day and at night. The test vehicle was equipped with GPS and a camera, and the subjects wore head-mounted eye-tracking glasses to record gaze. Gaze distributions for country roads, highways, urban roads, and specific urban roads were then calculated and compared. The day/night comparison showed that the subjects' horizontal fixation distribution was wider during the day than at night over the whole test route; when the distributions were divided into urban roads, country roads, and highways, the difference appeared in each road environment as well. For the vertical distribution, no clear differences between day and night were found for country roads or urban roads. On the highway, the vertical dispersion was significantly lower, so the gaze was more focused. On highways and urban roads there was a tendency for the gaze to be lowered. The differentiation between a residential road and a main road in the city made it clear that gaze behavior differs significantly depending on the urban area: the residential road led to broader gaze behavior, as the sides of the street were scanned much more often to detect potential hazards lurking between parked cars at an early stage. This paper highlights the contradictory results of eye-tracking research and shows that it is not advisable to define a single, universal gaze distribution for all environments. Gaze is highly situational and context-dependent, and generalized gaze distributions should not be used to design lighting functions. The research highlights the importance of an adaptive light distribution that adapts to the traffic situation and the environment, always providing good visibility for the driver and allowing natural gaze behavior.
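As a rough illustration of the kind of analysis described above, the following Python sketch computes the horizontal and vertical dispersion of gaze angles grouped by road type and time of day. The data frame, column names, and values are hypothetical placeholders, not the study's actual data schema or results.

```python
import numpy as np
import pandas as pd

# Hypothetical fixation log: one row per fixation, with gaze angles in
# degrees and road-type / daytime labels (all names and values invented).
rng = np.random.default_rng(0)
fixations = pd.DataFrame({
    "gaze_x_deg": rng.normal(0.0, 8.0, 1000),    # horizontal gaze angle
    "gaze_y_deg": rng.normal(-2.0, 4.0, 1000),   # vertical gaze angle
    "road_type":  rng.choice(["urban", "country", "highway"], 1000),
    "daytime":    rng.choice(["day", "night"], 1000),
})

# Dispersion (standard deviation) of horizontal and vertical gaze angles,
# grouped by road type and day/night -- the statistic compared in the study.
dispersion = (
    fixations
    .groupby(["road_type", "daytime"])[["gaze_x_deg", "gaze_y_deg"]]
    .std()
    .rename(columns={"gaze_x_deg": "horizontal_sd",
                     "gaze_y_deg": "vertical_sd"})
)
print(dispersion)
```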
Thermopile sensor arrays offer a good balance between person detection and localization capability on the one hand and privacy preservation through low resolution on the other. The latter is especially important in the context of smart building automation applications. Current research has shown that two machine learning-based algorithms are particularly prominent for general object detection: You Only Look Once (YOLOv5) and Detection Transformer (DETR). In this paper, both algorithms are adapted to localize people in 32 × 32-pixel thermal array images. The loss of precision due to the limited amount of labeled data was counteracted with a novel image generator (IIG), which creates synthetic thermal frames from the available labeled data. Multiple robustness tests were performed during the evaluation to determine the overall usability of both algorithms as well as the benefit of the image generator. Both algorithms achieve a high mean average precision (mAP) exceeding 98%. They also prove robust against disturbances such as warm air streams, solar radiation, replacement of the sensor with one of the same type, previously unseen persons, cold objects, movement along the image border, and people standing still. However, precision decreases for persons wearing thick layers of clothing, such as winter clothing, or in scenarios where the number of persons present exceeds the number the algorithms were trained on. In summary, both algorithms are suitable for detection and localization purposes, although YOLOv5m has the advantage in real-time image processing, along with a smaller model size and slightly higher precision.
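The paper's training pipeline is not reproduced here, but a minimal inference sketch along these lines is shown below. It assumes a YOLOv5m model fine-tuned on thermal frames is available as a local checkpoint; the file name, preprocessing, and upscaling factor are illustrative assumptions, not the authors' setup.

```python
import numpy as np
import torch

# Load a hypothetical fine-tuned YOLOv5m checkpoint via the ultralytics
# torch hub entry point ("yolov5m_thermal.pt" is a placeholder name).
model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="yolov5m_thermal.pt")

# Fake 32x32 thermopile frame in degrees Celsius; a real frame would come
# from the sensor driver.
frame = 20.0 + 5.0 * np.random.rand(32, 32).astype(np.float32)

# Normalize to 0-255 and replicate to three channels, since YOLOv5 expects
# an RGB-like image; nearest-neighbor upscale so persons span enough pixels.
img = ((frame - frame.min()) /
       (frame.max() - frame.min() + 1e-6) * 255).astype(np.uint8)
img = np.stack([img] * 3, axis=-1).repeat(8, axis=0).repeat(8, axis=1)

results = model(img)              # bounding boxes for detected persons
print(results.pandas().xyxy[0])   # one row per detection
```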
Vision science imposes rigorous requirements for the design and execution of psychophysical studies and experiments. These requirements ensure precise control over variables, accurate measurement of perceptual responses, and reproducibility of results, which are essential for investigating visual perception and its underlying mechanisms. Since different experiments have different requirements, not all aspects of a display system are critical for a given setting. Therefore, some display systems may be suitable for certain types of experiments but unsuitable for others. An additional challenge is that the performance of consumer systems is often highly dependent on specific monitor settings and firmware behavior. Here, we evaluate the performance of four display systems: a consumer LCD gaming monitor, a consumer OLED gaming monitor, a consumer OLED TV, and a VPixx PROPixx projector system. To allow the reader to assess the suitability of these systems for different experiments, we present a range of metrics: luminance behavior, luminance uniformity across the display surface, estimated gamma values and linearity, channel additivity, channel dependency, color gamut, pixel response time, and pixel waveform. In addition, we exhaustively report the monitor firmware settings used. Our analyses show that current consumer-level OLED display systems are promising and adequate to fulfill the requirements of some critical vision science experiments, allowing laboratories to run their experiments even without investing in high-quality professional display systems. For example, the tested Asus OLED gaming monitor shows excellent response time, a sharp square waveform even at 240 Hz, a color gamut that covers 94% of the DCI-P3 color space, and the best luminance uniformity among all four tested systems, making it a favorable option in terms of price-to-performance ratio.
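As an example of one of the listed metrics, a display's gamma can be estimated by fitting a power-law model to luminance measurements taken at several drive levels. The sketch below uses SciPy's curve_fit; the measurement values are fabricated for illustration and are not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Normalized pixel drive levels and corresponding luminance readings in
# cd/m^2. These numbers are made up; real values would come from a
# photometer aimed at a test patch on the display.
drive = np.linspace(0.0, 1.0, 9)
luminance = np.array([0.1, 1.2, 5.0, 12.4, 24.0,
                      41.5, 65.8, 97.0, 136.0])

def display_model(v, l_max, gamma, l_black):
    # Simple gamma model: L(v) = L_black + (L_max - L_black) * v**gamma
    return l_black + (l_max - l_black) * v ** gamma

popt, _ = curve_fit(display_model, drive, luminance, p0=(140.0, 2.2, 0.1))
l_max, gamma, l_black = popt
print(f"estimated gamma ~ {gamma:.2f}, peak luminance ~ {l_max:.1f} cd/m^2")
```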
Successful communication between highly automated vehicles and vulnerable road users will be crucial in the future. In addition to the technical requirements of the communication system, the projected content is also essential to ensure successful communication. Previous studies have investigated the necessary technical requirements for near-field projections; however, the impact of the presented content, whether symbol- or text-based, on these requirements has yet to be investigated. Therefore, a psychophysical subject study investigated the necessary detection probability for symbol- and text-based projections in the near field of a vehicle. The visibility of symbol- and text-based projections was analyzed via the subjects' detection rate of the tested projection in an ambient lighting scenario of 20 lx at two different distances. Additionally, the corresponding reaction time of the subjects was measured. The results showed that, contrary to expectations, arbitrarily increasing the contrast does not keep reducing the reaction time, which instead saturates at a level of 650 ms before the 90% detection threshold is reached for both projection contents. The observed detection contrasts indicate that symbol-based projections need approximately 25% less contrast than text-based projections to reach a 90% detection rate.
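One common way to obtain such a 90% threshold (not necessarily the exact method of this study) is to fit a psychometric function to the measured detection rates and invert it. The sketch below fits a logistic function with SciPy; the contrast levels and detection rates are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented contrast levels and observed detection rates (proportion of
# trials in which the projection was detected at each contrast).
contrast = np.array([0.02, 0.04, 0.06, 0.08, 0.10, 0.12])
detection_rate = np.array([0.05, 0.20, 0.55, 0.80, 0.93, 0.98])

def logistic(c, c50, slope):
    # Logistic psychometric function rising from 0 to 1 around c50.
    return 1.0 / (1.0 + np.exp(-slope * (c - c50)))

(c50, slope), _ = curve_fit(logistic, contrast, detection_rate,
                            p0=(0.06, 50.0))

# Invert the fitted function: logistic(c90) = 0.9
# => c90 = c50 + ln(0.9 / 0.1) / slope
c90 = c50 + np.log(0.9 / 0.1) / slope
print(f"estimated 90% detection contrast ~ {c90:.3f}")
```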
Lab head
Members (19)
Peter Bodrogi
Julia Maria Schikowski