Kerstin Schill’s research while affiliated with University of Bremen and other places


Publications (81)


Robot Pouring: Identifying Causes of Spillage and Selecting Alternative Action Parameters Using Probabilistic Actual Causation
  • Preprint
  • February 2025
  • Jonas Krumme · Christoph Zetzsche · [...] · Kerstin Schill

In everyday life, we perform tasks (e.g., cooking or cleaning) that involve a large variety of objects and goals. When confronted with an unexpected or unwanted outcome, we take corrective actions and try again until achieving the desired result. The reasoning performed to identify a cause of the observed outcome and to select an appropriate corrective action is a crucial aspect of human reasoning for successful task execution. Central to this reasoning is the assumption that a factor is responsible for producing the observed outcome. In this paper, we investigate the use of probabilistic actual causation to determine whether a factor is the cause of an observed undesired outcome. Furthermore, we show how the actual causation probabilities can be used to find alternative actions that change the outcome. We apply the probabilistic actual causation analysis to a robot pouring task. When spillage occurs, the analysis indicates whether a task parameter is the cause and how it should be changed to avoid spillage. The analysis requires a causal graph of the task and the corresponding conditional probability distributions. To fulfill these requirements, we perform a complete causal modeling procedure (i.e., task analysis, definition of variables, determination of the causal graph structure, and estimation of conditional probability distributions) using data from a realistic simulation of the robot pouring task, covering a large combinatorial space of task parameters. Based on the results, we discuss the implications of the variables' representation and how the alternative actions suggested by the actual causation analysis compare to the alternative solutions proposed by a human observer. This demonstrates the practical use of probabilistic actual causation analysis for selecting alternative action parameters.
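The cause-and-correction loop described in the abstract can be sketched with a toy discrete model. The variables, values, and probabilities below are purely illustrative assumptions, not the paper's actual causal model or data:

```python
# Hypothetical CPD (illustrative only): P(spillage | pouring_angle, fill_level).
P_SPILL = {
    ("high", "full"): 0.90,
    ("high", "half"): 0.40,
    ("low", "full"): 0.30,
    ("low", "half"): 0.05,
}

def is_probable_cause(angle, fill, alternative):
    """Treat the angle as a probability-raising cause of spillage if
    switching it to the alternative value lowers the spillage probability."""
    return P_SPILL[(angle, fill)] > P_SPILL[(alternative, fill)]

def select_alternative(fill):
    """Pick the pouring angle that minimizes the spillage probability,
    mirroring the 'select alternative action parameter' step."""
    return min(("high", "low"), key=lambda a: P_SPILL[(a, fill)])

# Observed: pouring at a high angle into a full container spilled.
alt = select_alternative("full")                # -> "low"
cause = is_probable_cause("high", "full", alt)  # -> True
```

In the paper's setting, the CPDs would instead be estimated from simulation data over the full combinatorial space of task parameters; the selection logic above only illustrates the shape of the decision.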


PRORETA 5 – building blocks for automated urban driving enhancing city road safety

April 2024 · at - Automatisierungstechnik

In the joint research project PRORETA 5, building blocks for automated driving in urban areas have been developed, implemented, and tested. The developed blocks comprise object tracking for cars, bicycles, and pedestrians, which feeds a multimodal object prediction that estimates the traffic participants' most likely trajectories. An anytime tree-based planning algorithm then calculates the vehicle's desired path. Finally, logic-based safety functions ensure a collision-free trajectory for the ego vehicle. These building blocks were integrated and tested in a prototype vehicle in urban scenarios. Furthermore, a novel general framework for specifying and testing traffic rule compliance has been developed. In this paper, the automated driving concept of PRORETA 5 is introduced and the developed methods are briefly explained.
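A minimal sketch of what a logic-based safety check on a planned path might look like, in the spirit of the building blocks above: test each planned pose against predicted object positions approximated by circles (as in the motion-planner figure below). The function name, radii, and geometry are illustrative assumptions, not PRORETA 5 code:

```python
import math

def first_collision(path, obstacle_circles, ego_radius=1.0):
    """Return the index of the first path point that intersects any
    predicted obstacle circle (x, y, r), or None if the path is free.
    Both the ego vehicle and obstacles are approximated by circles."""
    for i, (px, py) in enumerate(path):
        for ox, oy, orad in obstacle_circles:
            if math.hypot(px - ox, py - oy) < ego_radius + orad:
                return i
    return None

path = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
predicted = [(3.0, 0.0, 0.5)]           # a moving object's predicted circle
idx = first_collision(path, predicted)  # -> 2 (first point within 1.5 m)
```

A real safety function would of course operate on time-parameterized trajectories and account for uncertainty; the circle test only shows the basic geometric predicate.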




Fig. 5. Simulated maneuver generated by the motion planner involving an inversion of the driving direction and considering the predicted motion of a moving object (approximated by red circles). The colored points describe the past trajectories of both vehicles with the colors representing the speed values at the corresponding time. Data about the vehicle environment is given in black, the automatically generated free-space polygon is shown in orange and the (dynamic) Voronoi edges in green accordingly.
Fig. 6. A vehicle steered by the OPA3L software in Borgfeld, a suburb of Bremen, within the CARLA simulator.
Fig. 8. Sensor data displayed in the remote control center. LiDAR scans are rendered as point clouds, cameras are mapped to planes.
Fig. 9. © GeoBasis-DE / Landesamt GeoInformation Bremen 2019. Driven route (yellow) overlaid with the lanes (red) provided by the strategic decision making. The route starts in the east and leads to a turning circle in the west. After passing through it, the route returns to the starting point.
Fig. 10. Example of the evidential dynamic map. Green areas are observed as free, static cells are marked red, and dynamic cells are blue. Cyan represents areas with no information in the current scan where information was previously available. There are three dynamic objects in the scene, which are detected and marked with purple bounding boxes. Additionally, there are a number of parked vehicles, which are correctly excluded from the detection.


The OPA3L System and Testconcept for Urban Autonomous Driving
  • Conference Paper
  • Full-text available

October 2022 · 6 Citations





Evaluation of Measurement Space Representations of Deep Multi-Modal Object Detection for Extended Object Tracking in Autonomous Driving

November 2020 · 5 Citations

The perception ability of automated systems such as autonomous cars is crucial for safe and reliable operation. With the continuously growing accuracy of deep neural networks for object detection on the one hand and the investigation of appropriate space representations for object tracking on the other, both essential parts of perception have received special research attention in recent years. However, early fusion of multiple sensors turns the determination of suitable measurement spaces into a complex, non-trivial task. In this paper, we propose the use of a deep multi-modal object detection network for the early fusion of LiDAR and camera data to serve as a measurement source for an extended object tracking algorithm on Lie groups. We develop an extended Kalman filter and model the state space as the direct product Aff(2) × ℝ⁶, incorporating second- and third-order dynamics. We compare the tracking performance of different measurement space representations (SO(2) × ℝ⁴, SO(2)² × ℝ³, and Aff(2)) to evaluate how our object detection network encapsulates the measurement parameters and the associated uncertainties. Our results show that the lowest tracking errors in the case of single object tracking are obtained by representing the measurement space by the affine group. Thus, we assume that our proposed object detection network captures the intrinsic relationships between the measurement parameters, especially between position and orientation.
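Filtering on a Lie group rather than a vector space mainly changes how states are perturbed and how residuals are computed. A minimal sketch of the idea for the simplest rotational component, SO(2) (an angle with wrap-around); this is not the paper's Aff(2) × ℝ⁶ implementation, just the underlying manifold arithmetic:

```python
import math

def boxplus(theta, delta):
    """SO(2) '+' : apply a tangent-space perturbation to an angle and
    wrap the result back into (-pi, pi]."""
    a = theta + delta
    return math.atan2(math.sin(a), math.cos(a))

def boxminus(a, b):
    """SO(2) '-' : signed angular difference in (-pi, pi], i.e. the
    residual a manifold filter uses instead of plain subtraction."""
    return math.atan2(math.sin(a - b), math.cos(a - b))

# Plain subtraction across the +/-pi seam gives ~6.08 rad;
# the manifold residual correctly gives ~-0.2 rad.
r = boxminus(math.pi - 0.1, -math.pi + 0.1)
```

In a Kalman update, using such a residual keeps orientation innovations small and well-defined near the angle seam, which is exactly where naive vector-space filters fail.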


Citations (60)


... It takes about eight video cycles (20 to 30 msec) until the images can be interpreted correctly again. This corresponds to the delay time we humans also notice in our biological vision system [15]. ...

Reference:

Evolution of the “4-D Approach” to Dynamic Vision for Vehicles
Temporal and spatial constraints for mental modelling

... Frequent mowing at a low height can reduce lawn floral resources useful for pollinators [12]. Instead, regarding mowing trajectories, systematic cutting models provide the opportunity to reduce redundant movements and optimize the theoretical surface coverage, improving cutting distribution and minimizing untreated areas [13]. Previous studies have shown that autonomous mowers equipped with advanced technologies, such as GNSS navigation systems, can contribute to improving energy and work efficiency [14,15]. ...
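The "systematic cutting model" mentioned in this snippet is typically a back-and-forth (boustrophedon) coverage pattern. A toy waypoint generator for a rectangular lawn, with purely illustrative names and parameters:

```python
def boustrophedon(width, height, spacing):
    """Back-and-forth lane waypoints covering a width x height rectangle,
    the classic systematic pattern that avoids redundant passes."""
    path, y, left_to_right = [], 0.0, True
    while y <= height + 1e-9:
        lane = [(0.0, y), (width, y)]
        path.extend(lane if left_to_right else lane[::-1])
        left_to_right = not left_to_right
        y += spacing
    return path

waypoints = boustrophedon(10.0, 2.0, 1.0)  # 3 lanes, 6 waypoints
```

Real coverage planners additionally decompose non-rectangular lawns into cells and rely on precise localization (e.g., GNSS) to follow the lanes without gaps or overlap.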

Coverage Path Planning and Precise Localization for Autonomous Lawn Mowers
  • Citing Conference Paper
  • December 2022

... In particular, the ⊞-operator is used to generate the sigma points in (37), the measurement function h : S → Z in (38) is defined on manifold spaces, and the iterative algorithm of Table 1 is applied to compute the expected measurement in (39). Furthermore, the ⊟-operator is utilized to determine the difference between the sigma points and the expected measurement in the QR decomposition in (40) and in the Cholesky up- or downdate in (41). The same applies to the calculation of the cross-covariance in (42), where the ⊟-operator is used as well. ...
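The sigma-point generation via the ⊞-operator described in this snippet can be illustrated for a one-dimensional angular state: perturbations are drawn in the tangent space and mapped back onto the manifold. The scaling and function names are illustrative assumptions, not the cited paper's unscented-filter formulation:

```python
import math

def boxplus(theta, delta):
    """Angle perturbation with wrap to (-pi, pi] (SO(2) case)."""
    a = theta + delta
    return math.atan2(math.sin(a), math.cos(a))

def sigma_points_so2(mean, cov, kappa=2.0):
    """Unscented-style sigma points for a 1-D angular state: the mean
    plus symmetric tangent-space perturbations mapped back via boxplus."""
    spread = math.sqrt((1.0 + kappa) * cov)
    return [mean, boxplus(mean, spread), boxplus(mean, -spread)]

pts = sigma_points_so2(math.pi - 0.05, 0.01)  # all points stay wrapped
```

Because ⊞ wraps the result, sigma points near the ±π seam remain valid group elements, which is the property the cited filter relies on.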

The OPA3L System and Testconcept for Urban Autonomous Driving

... However, most of the work is based on the nonlinear optimization method, which is time-consuming and does not focus on consistency. In terms of filter-based methods, [29] performs additional updates using speed and steering measurements from wheel odometers based on the Multi-State Constraint Kalman Filter (MSCKF). Lee et al. propose a consistent VIWO along with both extrinsic and intrinsic calibration of the wheel encoder and analyze the observability [2]. ...

Visual-Inertial Odometry aided by Speed and Steering Angle Measurements
  • Citing Conference Paper
  • July 2022

... Consequently, extensive research has been conducted to develop various positioning systems and techniques for mobile robots [2] and self-driving cars [3] using sensors [4]- [6] such as wheel odometry, ultrasonic odometry, GNSS, inertial navigation system (INS), and visual odometry (VO). It is also crucial to note that these methods are not restrictive, as odometry methods can be multi-modal, i.e., sensor fusion of different sensors can be combined in a single algorithm [7], [8]. Each system possesses certain strengths and weaknesses; for instance, despite being simple to implement, wheel odometry suffers from position drift due to wheel slippage [4]. ...
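The wheel-odometry drift mentioned in this snippet is easy to reproduce with dead reckoning: a small multiplicative wheel-slip bias accumulates into position error over distance. The bicycle-model step below is a generic sketch with illustrative parameter names, not code from any cited system:

```python
import math

def step(pose, v, steering, dt, wheelbase=2.5, slip=0.0):
    """One dead-reckoning step of a bicycle model from wheel speed and
    steering angle; 'slip' models the wheel-slippage bias that makes
    pure wheel odometry drift."""
    x, y, theta = pose
    v_eff = v * (1.0 - slip)
    x += v_eff * math.cos(theta) * dt
    y += v_eff * math.sin(theta) * dt
    theta += v_eff * math.tan(steering) / wheelbase * dt
    return (x, y, theta)

pose_true, pose_odo = (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)
for _ in range(100):                       # drive straight for 10 s
    pose_true = step(pose_true, 1.0, 0.0, 0.1)
    pose_odo = step(pose_odo, 1.0, 0.0, 0.1, slip=0.05)
drift = pose_true[0] - pose_odo[0]         # -> 0.5 m after 10 m driven
```

Fusing such odometry with visual or inertial measurements, as in the cited works, is what bounds this unobservable drift.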

Visual-Multi-Sensor Odometry with Application in Autonomous Driving
  • Citing Conference Paper
  • April 2021

... However, identifying objects in real-world images can be difficult due to various factors such as lighting, object size and location, object distortions, and issues with imbalance involving both class and objective imbalance. Therefore, an improved deep multi-modal 3D detection system is proposed in [3], which utilizes LiDAR and image data. Current detection techniques that employ heterogeneous data from several sensors are investigated with real and simulated information in [4] to determine the effectiveness of these methods. ...

Improving Deep Multi-modal 3D Object Detection for Autonomous Driving
  • Citing Conference Paper
  • February 2021

... However, annotating naturalistic expressions and their contexts can be more complicated and labor-intensive than laboratory datasets. Advanced multimodal annotation tools [88] may help provide multimodal annotation to evaluate facial expressions together with other nonverbal modalities and rich contextual information to provide accurate portrayals of the interaction between facial expressions and contexts. In the following section, we discuss the novel applications of MLLMs that could circumvent the need for extensively annotated datasets, fostering further advancement in naturalistic affective research. ...

From Human to Robot Everyday Activity
  • Citing Conference Paper
  • October 2020

... Some works [1]- [4], [44] demonstrate that combining LiDAR point clouds with camera images significantly improves performance in object detection. Other methods [5], [6], [45] focus on semantic segmentation, while some [7], [8], [46] utilize multi-modal information for object tracking. ...

Evaluation of Measurement Space Representations of Deep Multi-Modal Object Detection for Extended Object Tracking in Autonomous Driving
  • Citing Conference Paper
  • November 2020

... One example is extended object tracking, where the state of a dynamic object (e.g., another traffic participant) is estimated. This state consists of a position, orientation, size, and velocity, and thus, it is an element of a manifold [44]. Another possible application is maritime robotics, where we aim to use the proposed approach for localizing an unmanned surface vehicle. ...

Extended Object Tracking on the Affine Group Aff(2)
  • Citing Conference Paper
  • July 2020