Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB
Recent publications
Participatory Design means recognizing that those who will be affected by a future technology should have an active say in its creation. Yet, despite continuous interest in involving people as future users and consumers in the design of novel and innovative technology, participatory approaches to technology design remain relatively underdeveloped in the German HCI community. This article brings together the diversity of voices, domains, perspectives, approaches, and methods that collectively shape Participatory Design in Germany. In the following, we (1) outline our understanding of participatory practice and how it differs from mere user involvement; (2) reflect on current issues of participatory and fair technology design within the German Participatory Design community; and (3) discuss tensions relevant to the field that we expect to arise in the future and that we derived from our 2021 workshop through a speculative method. We contribute an introduction, an overview of current themes, and a speculative outlook on future issues of Participatory Design in Germany. The article is meant to inform, provoke, inspire and, ultimately, invite participation within the wider Computer Science community.
The online classification of grid disturbances is an important prerequisite for the automated and reliable operation of power transmission systems. Most state-of-the-art approaches assume that all classes are already known in the training phase and cannot handle new disturbance events that appear in the application phase, leading to severe misclassifications. To mitigate this shortcoming, disturbance detection is investigated as an open classification task, and a novel recurrent Siamese neural network architecture is introduced to identify and locate known and unknown disturbance events from phasor measurements. Extending preliminary work, a probabilistic distance-based classification approach with an integrated rejection mechanism is presented, which enables learning class-dependent decision boundaries and margins to reduce the open-set risk. A detailed performance analysis is presented, including multiple benchmark methods in different closed-set and open-set classification tasks for a simulated power transmission system. Additionally, both limited and full observability of the grid with phasor measurements are addressed in the experiments.
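The rejection idea behind such a distance-based open-set classifier can be sketched as follows: an embedding is assigned to the nearest class prototype unless its distance exceeds that class's learned decision boundary, in which case it is flagged as an unknown event. This is a minimal illustration with hypothetical prototypes and thresholds, not the paper's actual model.

```python
import numpy as np

def classify_with_rejection(embedding, prototypes, thresholds):
    """Assign an embedding to the nearest class prototype, or reject it as
    'unknown' (-1) when the distance exceeds the class-specific boundary."""
    dists = np.linalg.norm(prototypes - embedding, axis=1)
    k = int(np.argmin(dists))
    if dists[k] > thresholds[k]:
        return -1  # unseen disturbance event
    return k
```

With prototypes at [0, 0] and [10, 10] and a boundary of 2 for each class, an embedding at [5, 5] is rejected as unknown while embeddings close to either prototype are accepted.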
The Electricity Supply Chain is a system of enabling procedures to optimize processes ranging from the production to the transportation and consumption of electricity. The proportion of distributed energy sources within the electricity system is increasing steadily, which necessitates improved monitoring capabilities to ensure the overall reliability and quality of the Electricity Supply Chain. Automation is strongly required to process the growing amount of data: large amounts of heterogeneous data must be handled and the information processed using forecasting and optimization techniques. Artificial Intelligence techniques are crucial for extending human cognitive abilities in these tasks. In our work, we synthesize the main impacts of the Artificial Intelligence paradigm on the automation of the Electricity Supply Chain. We describe the emerging automation through Artificial Intelligence in every layer of the Smart Grid Architecture Model and highlight state-of-the-art approaches. In the review, we focus on the following Electricity Supply Chain functionalities: generation, maintenance, pre-processing, analysis, forecasting, optimization, and trading within energy systems. After investigating the individual perspectives, we examine the potential implementation of a fully automated Electricity Supply Chain. Lastly, we discuss perspectives and limitations of the transformation from conventional to automated Electricity Supply Chains, specifically in terms of human interaction, Artificial Intelligence adaptation, energy transition, and sustainability.
In human-machine systems, the (partially) automated machine, an interface between human and machine, and humans with their perceptual, cognitive, and action capabilities interact through a human-machine dialogue. Knowledge about human information processing in this dialogue plays an essential role in the systematic design of human-machine systems. Information acquisition is of particular importance here, and it can be effectively supported by virtual and augmented reality as well as ambient intelligence technologies.
e18750 Background: Data on SARS-CoV-2 infections in oncological patients in the outpatient setting are scarce. Methods: During the spread of the Delta variant between April 2021 and September 2021, a total of 10,677 patients were tested for SARS-CoV-2 infection by RT-qPCR in seven outpatient clinics in Bavaria, Germany. Results: Within the tested patient cohort, 4,960 patients (46.5%) suffered from a malignant disease (74% solid tumors and 26% malignant hematological diseases). This group was compared with 5,717 patients (53.5%) without a malignant disease (33.1% with other hematological diseases and 66.9% without a hematological or oncological disease). During the observation period, 119 patients with malignancies (2.4%) tested positive (88 patients with solid tumors; 31 patients with malignant hematological diseases), compared with 115 positive patients (2.0%) in the control group. 32 of the 119 positively tested patients with malignant disease (26.9%) required hospitalization, and 9/32 patients (28.1%) died during the clinical course. Conclusions: These observations are in clear contrast to data from patients we evaluated during the pre-Delta period between 15 and 26 April 2020 in the same seven outpatient clinics. In that period, a total of 1,227 patients were tested for SARS-CoV-2 by RT-qPCR. 78/1,227 patients (6.3%) tested positive, and most showed mild symptoms of infection. None of the SARS-CoV-2-infected patients died. These data were evaluated during a period when no vaccine was available. Vaccination of patients with malignancies with BioNTech/Pfizer's mRNA vaccine was started in April 2021. The response to the vaccine was tested by an antibody assay (Elecsys Anti-SARS-CoV-2 S immunoassay, Roche) at the earliest four weeks after the second vaccination.
To assess the response, we compared five cohorts: patients who received (i) B-cell-depleting antibodies, (ii) checkpoint inhibitors (ICIs), (iii) chemotherapy, or (iv) tyrosine kinase inhibitors (TKIs), and (v) healthy controls. Patients treated with ICIs or TKIs showed a vaccination response comparable to that of the healthy controls, while patients receiving Rituximab/Obinutuzumab showed no significant humoral vaccination response at all. The more severe disease course of patients infected with the SARS-CoV-2 Delta variant compared to the initial waves of infection strongly underlines the importance of vaccination in cancer patients.
Immersive technologies, such as virtual reality, enable users to view and evaluate three-dimensional content, e.g., geographic data. Besides allowing users to navigate this data at life-size scale, a tabletop display offers a better overview of a larger area. This paper presents six different techniques to interact with immersive digital map table displays, i.e., panning, rotating, and zooming the map and indicating a position. The implemented interaction methods were evaluated in a user study with 12 participants. The results show that using a virtual laser pointer in combination with the buttons and joystick of a controller yields the best results regarding interaction time, workload, and user preference. The user study also shows that interaction methods should be customizable so that users can adapt them to their abilities. Overall, the proposed virtual laser pointer technique achieves a good balance between physical and cognitive effort and yields good results for users with varying experience levels.
The coherence image, as a product of a coherent SAR image pair, can expose even subtle changes in the surface of a scene, such as vehicle tracks. For machine learning models, the large amount of required training data is often a crucial issue; a general solution is data augmentation. Standard techniques, however, were predominantly developed for optical imagery, do not account for SAR-specific characteristics, and are thus only partially applicable to SAR imagery. In this paper, several data augmentation techniques are investigated for their impact on the performance of CNN-based vehicle track detection, with the aim of generating an optimized data set. Quantitative results of the performance comparison are shown. Furthermore, the performance of the fully augmented data set is put into relation to training with a large non-augmented data set.
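One reason optical-style augmentation transfers only partially is that photometric jitter (brightness, contrast) alters exactly the statistics a coherence image encodes, whereas purely geometric transforms merely rearrange pixels. A minimal sketch of such a geometry-only augmentation (the function name and the specific choice of transforms are ours, not the paper's):

```python
import numpy as np

def augment_sar_patch(patch, rng):
    """Apply random flips and a random 90-degree rotation to a coherence
    patch. These transforms permute pixel positions but leave the pixel
    values, and hence the coherence statistics, unchanged."""
    if rng.random() < 0.5:
        patch = np.fliplr(patch)
    if rng.random() < 0.5:
        patch = np.flipud(patch)
    k = rng.integers(0, 4)  # 0, 90, 180, or 270 degrees
    return np.rot90(patch, k)
```

Every output patch contains exactly the original pixel values, only spatially rearranged.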
Building modeling from remote sensing data is essential for creating accurate 3D and 4D digital twins, especially for temperature modeling. In order to represent buildings as gap-free, visually appealing, and detail-rich models, geo-typical prototypes should be placed in the scene. The sensor data and freely available OSM data provide guidelines for the best possible matching. In this paper, the default similarity function based on intersection over union is extended by terms reflecting the similarity of elevation values, orientation towards the road, and trees in the vicinity. The goodness of fit was evaluated by architecture experts as well as by thermal simulations, with a thermal image as ground truth and error measures based on mean absolute error, root mean square error, and mutual information. It could be concluded that while the intersection-over-union measure still seems to be the one most preferred by architects, slightly better thermal simulation results are obtained by taking all similarity terms into account.
An insufficient amount or the complete absence of reference data for the training of classifiers is a general problem. Especially state-of-the-art deep learning approaches depend on the availability or adaptation of such reference data to produce the reliable results they are designed for. This paper pursues different approaches to land cover classification from aerial images in the absence of training data. First, we analyze the performance of traditional classification without reference data, using clustering techniques and salient features for the assignment of semantic labels. Second, we transfer the results as training data to a DeepLabv3+ CNN with pre-trained weights to demonstrate the usability of the generated training data. Third, we expand the clustering approaches and combine them with a Random Forest classifier. Finally, for cases where user interaction and manual annotation of training data are still necessary, we also introduce our labeling GUI, which enables simple, fast, and comfortable training data generation with only a few clicks. To evaluate our procedure, we used two datasets, including the Vaihingen benchmark, for which ground truth is available. Without any interactive steps except setting a few algorithm parameters, we achieved an overall accuracy of 75% using the DeepLab method with image data only.
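The first step, generating pseudo-labels by clustering when no reference data exists, can be illustrated with a minimal k-means over per-pixel feature vectors. This is a generic sketch under our own assumptions (the paper's clustering techniques and features may differ):

```python
import numpy as np

def kmeans_pseudolabels(features, k, iters=20, seed=0):
    """Cluster feature vectors with a minimal k-means; the resulting cluster
    ids can serve as pseudo-labels for training a supervised classifier
    after semantic labels are assigned to clusters."""
    rng = np.random.default_rng(seed)
    # Initialize centers with k distinct samples (fancy indexing copies).
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Distance of every sample to every center, shape (n, k).
        d = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = features[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels
```

On two well-separated groups of feature vectors, the returned labels agree within each group and differ across groups, which is exactly what makes them usable as pseudo-labels.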
The dynamic operation of power transmission systems requires the acquisition of reliable and accurate measurement and state information. The use of TCP/IP-based communication protocols such as IEEE C37.118 or IEC 61850 introduces different gateways to launch cyber-attacks and to compromise major system operation functionalities. Within this study, a combined network intrusion and phasor data anomaly detection system is proposed to enable a secure system operation in the presence of cyber-attacks for dynamic control centers. This includes the utilization of expert-rules, one-class classifiers, as well as recurrent neural networks to monitor different network packet and measurement information. The effectiveness of the proposed network intrusion and phasor data anomaly detection system is shown within a real-time simulation testbed considering multiple operation and cyber-attack conditions.
Dynamic vision sensors (DVS) differ from conventional cameras in that only the intensity changes of individual pixels are perceived and transmitted as asynchronous events; no complete intensity image is produced. Among other things, the technology promises high temporal resolution, low latencies, and low data rates. While such sensors currently attract considerable scientific attention, there are only few publications demonstrating their success in practice. One application area that has hardly been considered so far, but whose particular characteristics make it appear especially suited for the use of DVS, is automated visual inspection. In this work, existing event-based algorithms are evaluated, adapted to the new application domain, and tested. In addition, an algorithmic approach is presented that determines, based on events, the optimal time window for object classification. To evaluate the methods, two new datasets are generated that cover typical scenarios of automated visual inspection, such as the classification of textured objects on a conveyor belt and in free fall. The results show that the time-window optimization significantly increases the correct-classification rate of existing algorithms. Furthermore, it is shown that, owing to their intrinsic properties, DVS open up new possibilities in the field of automated visual inspection.
Adaptive optics systems are used to compensate for wavefront distortions introduced by atmospheric turbulence. The distortions are corrected by an adaptable device, normally a deformable mirror. The control signal of the mirror is based on the measurement delivered by a wavefront sensor. Relevant characteristics of the wavefront sensor are the measurement accuracy, the achievable measurement speed, and the robustness against scintillation. The modal holographic wavefront sensor can theoretically provide the highest bandwidth compared to other state-of-the-art wavefront sensors, and it is robust against scintillation effects. However, its measurement accuracy suffers from crosstalk effects between the different aberration modes present in the wavefront. In this paper, we evaluate whether the sensor can be used effectively in a closed-loop AO system under realistic turbulence conditions. We simulate realistic optical turbulence represented by more than 2500 aberration modes and take different signal-to-noise ratios into account. We determine the performance of a closed-loop AO system based on the holographic sensor. To counter the crosstalk effects, a careful choice of the key design parameters of the sensor is necessary. Therefore, we apply an optimization method to find the sensor design that maximizes the measurement accuracy. By modifying this method to take into account the changing effective turbulence conditions during closed-loop operation, we can improve the performance of the system even further, especially for demanding signal-to-noise ratios. Finally, we propose implementing multiple holographic wavefront sensors without the use of additional hardware, to perform multiple measurements at the same time. We show that the measurement accuracy of the sensor, and with it the wavefront flatness, can be increased significantly without reducing the bandwidth of the adaptive optics system.
Virtual non-calcium (VNCa) images from dual-energy computed tomography (DECT) have shown high potential to diagnose bone marrow disease of the spine, which is frequently disguised by dense trabecular bone on conventional CT. In this study, we aimed to define reference values for VNCa bone marrow images of the spine in a large-scale cohort of healthy individuals. DECT was performed after resection of a malignant skin tumor without evidence of metastatic disease. Image analysis was fully automated and did not require specific user interaction. The thoracolumbar spine was segmented by a pretrained convolutional neural network. Volumetric VNCa data of the spine's bone marrow space were processed using the maximum, medium, and low calcium suppression indices. Histograms of VNCa attenuation were created for each exam and suppression setting. We included 500 exams of 168 individuals (88 female; age 61.0 ± 15.9 years). A total of 8298 vertebrae were segmented. The attenuation histograms' overlap of two consecutive exams, as a measure of intraindividual consistency, yielded a median of 0.93 (IQR: 0.88–0.96). As our main result, we provide the age- and sex-specific bone marrow attenuation profiles of a large-scale cohort of individuals with healthy trabecular bone structure as a reference for future studies. We conclude that artificial-intelligence-supported, fully automated volumetric assessment is an intraindividually robust method to image the spine's bone marrow using VNCa data from DECT.
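The histogram-overlap consistency measure can be sketched as the overlap coefficient of two normalized attenuation histograms: 1.0 for identical distributions, 0.0 for disjoint ones. Bin count and attenuation range below are illustrative assumptions, not values from the study:

```python
import numpy as np

def histogram_overlap(a, b, bins=64, hu_range=(-200.0, 200.0)):
    """Overlap coefficient of two attenuation-value samples: build
    normalized histograms over a common binning and sum the bin-wise
    minima. Returns a value in [0, 1]; 1 means identical histograms."""
    ha, _ = np.histogram(a, bins=bins, range=hu_range)
    hb, _ = np.histogram(b, bins=bins, range=hu_range)
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    return float(np.minimum(ha, hb).sum())
```

Two consecutive exams of the same individual would thus yield a value near 1 (the study reports a median of 0.93), while unrelated attenuation distributions score near 0.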
Digitization is becoming more and more important in the medical sector. Through electronic health records and the growing amount of digital patient data available, big data research finds an increasing number of use cases. The rising amount of data and the associated privacy risks can be overwhelming for patients, who may feel they have lost control of their data. Several previous studies on digital consent have tried to solve this problem and empower the patient; however, there is no complete solution to these questions yet. This paper presents the concept of Sovereign Digital Consent, combining a consent privacy impact quantification with a technology for proactive sovereign consent. The privacy impact quantification helps the patient comprehend the potential risk of sharing data and considers personal preferences regarding acceptance of a research project. The proactive dynamic consent implementation provides fine-granular digital consent using a medical data categorization terminology. This gives patients the ability to control their consent decisions dynamically and is research-friendly through the automatic enforcement of the patients' consent decisions. Both technologies are implemented and evaluated in a prototypical application. With the combination of these technologies, a promising step towards patient empowerment through Sovereign Digital Consent can be made.
With the vision "Innovation by experiment", the Bauhaus.MobilityLab started in July 2020 as a living lab in the Brühl district of the city of Erfurt, Thuringia, Germany. As a unique project, it couples the sectors of mobility, logistics, and energy into a unified living lab. It allows innovative services to be designed, developed, and evaluated to increase the quality of life in the city. Bauhaus.MobilityLab offers access to live smart city data from different domains and provides a set of powerful artificial intelligence (AI) algorithms for data processing, analytics, and forecasting. In contrast to existing platforms, its uniqueness lies in the available, integrated living lab, which allows new smart city services to be rolled out directly and their impact evaluated in the real world. This paper describes the implementation of the technical platform supporting the Bauhaus.MobilityLab, realized according to DIN SPEC 91357 as an open urban platform. It focuses on data sharing based on the concepts of the International Data Spaces and on the integration of AI algorithms. The concepts are presented using examples from the energy domain.
The automated identification and localisation of grid disturbances is a major research area and a key technology for the monitoring and control of future power systems. Current recognition systems rely on sufficient training data and are very error-prone for disturbance events unseen during training. This study introduces a robust Siamese recurrent neural network using attention-based embedding functions to simultaneously identify and locate disturbances from synchrophasor data. Additionally, a novel double-sigmoid classifier is introduced for reliable differentiation between known and unknown disturbance types and locations. Different models are evaluated within an open-set classification problem for a generic power transmission system considering different unknown disturbance events. A detailed analysis of the results is provided, and the classification results are compared with a state-of-the-art open-set classifier.
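The general shape of a double-sigmoid acceptance score can be sketched as the product of two sigmoids over an embedding distance: the score stays near 1 well inside both thresholds and falls towards 0 beyond them, giving a soft known/unknown decision. The product form, the two thresholds, and the slope below are our illustrative assumptions; the paper's exact parameterization may differ.

```python
import math

def double_sigmoid(d, tau_inner, tau_outer, slope=3.0):
    """Soft acceptance score for a distance d to a known class: the product
    of two decreasing sigmoids with thresholds tau_inner < tau_outer.
    Close to 1 for small d (known), close to 0 for large d (unknown)."""
    s_in = 1.0 / (1.0 + math.exp(-slope * (tau_inner - d)))
    s_out = 1.0 / (1.0 + math.exp(-slope * (tau_outer - d)))
    return s_in * s_out
```

Using two sigmoids rather than one allows the inner and outer slopes of the decision boundary to be shaped independently.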
238 members
Alexander Streicher
  • Department of Interoperability and Assistance Systems (IAS)
Christian Eisele
  • Department of Signatorics (SIG)
Julius Pfrommer
  • Department of Information Management and Production Control (ILT)
Michael Arens
  • Department of Object Recognition (OBJ)
Marcus Hebel
  • Department of Object Recognition (OBJ)
Fraunhoferstraße 1, 76131, Karlsruhe, Germany
Head of institution
Prof. Dr.-Ing. habil. Jürgen Beyerer
+49 721 6091-0
+49 721 6091-413