Semantic image segmentation is an essential task for autonomous vehicles and self-driving cars, where complete, real-time perception of the surroundings is mandatory. Convolutional neural network (CNN) approaches to semantic segmentation stand out over other state-of-the-art solutions due to their powerful generalization over unknown data and end-to-end training. Fisheye images are important due to their large field of view and their ability to reveal information from broader surroundings. Nevertheless, they pose unique challenges for CNNs because of the object distortion that results from the fisheye lens and object position. In addition, the availability of large annotated fisheye datasets required for CNN training is rather limited. In this paper, we investigate the use of deformable convolutions to accommodate distortions in fisheye image segmentation with a fully residual U-Net, learning unknown geometric transformations via filters of variable shape and size. The proposed models and integration strategies are exploited within two main paradigms: single (front) view and multi-view fisheye image segmentation. The proposed methods are validated on synthetic and real fisheye images from the WoodScape and SynWoodScape datasets. The results confirm the effectiveness of the deformable fully residual U-Net in learning unknown geometric distortions in both paradigms, demonstrate the possibility of learning view-agnostic distortion properties when trained on multi-view data, and shed light on the role of surround-view images in increasing segmentation performance relative to the single view. Finally, our experiments suggest that deformable convolutions are a powerful tool for increasing the efficiency of fully residual U-Nets for semantic segmentation of automotive fisheye images.
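The core idea of a deformable convolution can be sketched in a few lines: each filter tap is displaced by a learned (dy, dx) offset and the input is sampled bilinearly at the resulting fractional location, so the filter's sampling grid can warp to follow lens distortion. The sketch below is a minimal, single-location NumPy illustration under assumed names (`bilinear_sample`, `deformable_conv_point`); a real implementation (e.g. in a deep learning framework) would vectorize this and learn the offsets with an auxiliary convolution.

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Bilinearly interpolate a 2-D image at fractional coordinates (y, x)."""
    H, W = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
    y0, x0 = max(y0, 0), max(x0, 0)
    wy, wx = y - np.floor(y), x - np.floor(x)
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0] + wy * wx * img[y1, x1])

def deformable_conv_point(img, weights, offsets, cy, cx):
    """3x3 deformable convolution evaluated at one output location (cy, cx).

    offsets: (3, 3, 2) array of learned per-tap (dy, dx) displacements;
    with all offsets zero this reduces to a standard 3x3 convolution.
    """
    out = 0.0
    for i, dy in enumerate((-1, 0, 1)):
        for j, dx in enumerate((-1, 0, 1)):
            oy, ox = offsets[i, j]
            out += weights[i, j] * bilinear_sample(img, cy + dy + oy, cx + dx + ox)
    return out
```

With zero offsets the result matches an ordinary convolution, which is why deformable layers can be initialized to behave like regular ones and then learn the distortion-specific sampling pattern during training.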
Despite recent breakthroughs in the domain of implicit generative models, evaluating these models remains challenging. With no single metric to assess overall performance, various existing metrics offer only partial information. This issue is further compounded for unintuitive data types such as time series, where manual inspection is infeasible. This deficiency hinders the confident application of modern implicit generative models to time series data. To alleviate this problem, we propose two new metrics, the InceptionTime Score (ITS) and the Fréchet InceptionTime Distance (FITD), to assess the quality of class-conditional generative models on time series data. We conduct extensive experiments on 80 different datasets to study the discriminative capabilities of the proposed metrics alongside two existing evaluation metrics: Train on Synthetic Test on Real (TSTR) and Train on Real Test on Synthetic (TRTS). Our evaluations reveal that the proposed metrics, ITS and FITD, in combination with TSTR, can accurately assess class-conditional generative model performance and detect common issues in implicit generative models. Our findings suggest that the proposed evaluation framework can be a valuable tool for confidently applying modern implicit generative models in time series analysis.
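A Fréchet distance of the FITD kind fits a Gaussian to real and generated embeddings and compares the two: d² = ||μ₁ − μ₂||² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^½). The sketch below computes this in plain NumPy (the trace of the matrix square root is obtained from the eigenvalues of Σ₁Σ₂); the InceptionTime encoder that would produce the embeddings is assumed to exist elsewhere, and the function name is illustrative, not the paper's code.

```python
import numpy as np

def frechet_distance(feats_real, feats_gen):
    """Fréchet distance between Gaussians fitted to two feature sets.

    feats_*: (n_samples, n_features) embeddings, e.g. from a pretrained
    InceptionTime encoder applied to real and generated time series.
    """
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_gen, rowvar=False)
    # Tr((S1 S2)^(1/2)) = sum of square roots of the eigenvalues of S1 @ S2;
    # clip tiny negative/imaginary parts caused by floating-point noise.
    eigvals = np.linalg.eigvals(s1 @ s2)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    diff = mu1 - mu2
    return diff @ diff + np.trace(s1 + s2) - 2.0 * tr_sqrt
```

Identical feature sets yield a distance of (numerically) zero, and shifting every embedding by a constant vector adds exactly the squared length of that shift, which makes the metric's sensitivity to mean and covariance mismatch easy to verify.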
In product development, it is of great importance that a complete, unambiguous, and, as far as possible, contradiction-free target system is defined. Requirements documents of complex systems can contain several thousand individual requirements, derived in an interdisciplinary manner and written in natural language by many different stakeholders. Hence, errors, in the form of contradictions, cannot be completely avoided in these documents, and today they must be corrected manually with high effort. This paper presents an important building block for automated contradiction detection and quality analysis of requirements documents. We discuss the necessary identification of conditions in requirements and the extraction of the verbal expressions associated with condition and effect, respectively. We applied and analyzed natural language processing methods based on grammatical models versus machine learning models. The models have been applied to 1,861 real-world requirements. Both approaches generate promising results, with accuracy exceeding 98% in some cases. However, in structured specification texts, a grammatical model is preferable due to lower effort in preprocessing and better usability.
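The grammatical route to condition identification can be illustrated with a minimal rule-based sketch: look for a conditional marker ("if", "when", ...) and split the sentence into its condition and effect clauses. The marker list and splitting heuristic below are deliberately simplified assumptions for illustration; the paper's grammar-based models cover far more constructions than this toy regex.

```python
import re

# Hypothetical, incomplete marker list for illustration only.
CONDITION_MARKERS = r"\b(if|when|whenever|as soon as|in case)\b"

def split_condition_effect(requirement):
    """Rule-based sketch: detect a conditional clause in a requirement
    sentence and split it into (condition, effect).

    Returns None when no conditional marker is found, i.e. the
    requirement is treated as unconditional.
    """
    m = re.search(CONDITION_MARKERS, requirement, flags=re.IGNORECASE)
    if m is None:
        return None
    tail = requirement[m.end():]
    # 'If <condition>, [then] <effect>' -- split at the first comma or 'then'.
    parts = re.split(r",\s*(?:then\s+)?|\s+then\s+", tail, maxsplit=1)
    if len(parts) < 2:
        return None
    condition, effect = parts[0].strip(), parts[1].strip().rstrip(".")
    return condition, effect
```

For structured specification texts, where requirements follow sentence templates, rules of this kind need little preprocessing, which matches the paper's observation that grammatical models are preferable in that setting.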
Machine learning techniques such as neural networks have the potential to improve the performance and applicability of model predictive control for real-world systems. However, they also carry the risk of erratic, unpredictable behavior and malfunction: neural networks might fail to predict system behavior, as it is often impossible to provide strict performance or uncertainty bounds. While this challenge can be tackled using robust model predictive control approaches that span a safety net around the machine-learning-supported predictions, doing so can lead to significant performance degradation and infeasibility. To address this, a safe neural-network-supported learning tube model predictive control scheme is proposed, which bounds the worst-case performance in case of a malfunctioning machine learning component while reducing conservatism. The basic idea is to constrain the neural network to stay in the vicinity of a given nominal model, with the error dynamics directly incorporated into the neural network output function. The error dynamics therefore do not require an additional control input, so input constraint tightening can be omitted. Constraint fulfillment is guaranteed, robust set stability for a particular class of learning functions is established, and an upper bound on the performance of a malfunctioning neural network is given. The method is evaluated in simulations considering a rover operating in an uncertain environment.
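The safety mechanism of keeping the learned component in the vicinity of a nominal model can be sketched as follows: the network only refines the nominal prediction, and its correction is saturated to a tube of fixed radius, so even a malfunctioning network perturbs the prediction by a known bounded amount that robust constraint handling can absorb. All names and the simple clipping construction below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def safe_prediction(x, u, nominal_model, nn_residual, tube_radius):
    """Sketch of a tube-bounded, NN-supported prediction.

    nominal_model(x, u): presumed given nominal dynamics prediction.
    nn_residual(x, u):   learned correction to the nominal model.
    tube_radius:         element-wise bound on the allowed correction.

    The correction is clipped to [-tube_radius, tube_radius], so a
    malfunctioning network can shift the prediction by at most
    tube_radius (infinity norm) from the nominal model.
    """
    x_nom = nominal_model(x, u)
    correction = np.clip(nn_residual(x, u), -tube_radius, tube_radius)
    return x_nom + correction
```

Because the worst-case deviation from the nominal prediction is known a priori, state constraints can be tightened by that amount once, offline, which is the intuition behind bounding the worst-case performance of a malfunctioning learning component.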