Proceedings of the ARW 2022
Formal verification is the process of mathematically proving or disproving the correctness of a system with respect to previously defined specifications. Applied to robotic workflows, it can be used to prove their functional correctness, which gains particular importance with the introduction of robot programming approaches for non-experts. In this paper, a software toolchain for modeling robotic workflows and transforming them into formally verifiable model representations is presented. A graphical way of modeling robotic workflows, with a subsequent automatic transformation into verifiable code, forms the core of the presented toolchain. A software tool for generating formal specifications from a modeled robotic workflow completes the toolchain presented in this work. The output artifacts of the individual software components ultimately allow formal verification of robotic workflows against a desired behavior represented by the generated specifications.
In this article, we contribute to robot geographies by developing the idea of robotic ‘liveliness’ in the context of their increased use during the COVID-19 pandemic. Our framing draws on new materialism, and builds the idea of liveliness by considering robots’ agential capacities in three different ways: as apparently autonomous technologies; as inorganic and mechanical bodies; and as perpetually unfinished and contingent things. We examine a range of examples of their deployment during the pandemic to speculate on the potential for robots to emerge as ‘caring subjects’ via this notion of liveliness, and argue that it offers an approach that can contribute to critiques about their use in ‘caring’ roles, an application which is rapidly developing in the area of social robotics. We contend that this claim to ‘care’ within robotics is one reason why exploration and framing of their liveliness is needed.
Scene understanding algorithms in computer vision are improving dramatically by training deep convolutional neural networks on millions of accurately annotated images. Collecting large-scale datasets for this kind of training is challenging, and the learning algorithms are only as good as the data they train on. Training annotations are often obtained by taking the majority label from independent crowdsourced workers using platforms such as Amazon Mechanical Turk. However, the accuracy of the resulting annotations can vary, with the hardest-to-annotate samples having prohibitively low accuracy. Our insight is that, in cases where independent worker annotations are poor, more accurate results can be obtained by having workers collaborate. This paper introduces consensus agreement games, a novel method for assigning annotations to images based on the agreement of multiple consensuses of small cliques of workers. We demonstrate that this approach reduces error by 37.8% on two different datasets at a cost of 0.17 per annotation. The higher cost is justified because our method does not need to be run on the entire dataset. Ultimately, our method enables us to more accurately annotate images and build more challenging training datasets for learning algorithms.
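The abstract does not spell out the consensus-agreement mechanism, but the core idea it contrasts with plain majority voting can be sketched as follows: accept a label only when the majorities of several small worker cliques agree, and flag the sample for further review otherwise. The function names and the two-clique structure here are illustrative assumptions, not the paper's actual protocol.

```python
from collections import Counter

def majority_label(votes):
    """Plain baseline: the most common label among independent worker votes."""
    return Counter(votes).most_common(1)[0][0]

def clique_consensus(cliques):
    """Sketch of clique-level agreement: each small clique of workers first
    forms its own majority label; the sample is only assigned a label when
    all clique consensuses agree, otherwise None signals that the sample
    needs further (collaborative) annotation effort."""
    consensuses = [majority_label(clique) for clique in cliques]
    if len(set(consensuses)) == 1:
        return consensuses[0]
    return None
```

For an easy sample both cliques agree and the label is accepted immediately; for a hard sample the disagreement between cliques surfaces it as needing extra attention, which is why the method need not be run on the entire dataset.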
This paper proposes a positioning algorithm for a semi-autonomous robot in subterranean scenarios. The robot is equipped with positioning sensors, imaging sensors, and sensors to detect hazardous materials. The sensors can be used to automatically generate a site map to increase safety for emergency forces. To create an accurate map, the position and attitude of the robot have to be determined. This is done using an extended Kalman filter which fuses data from LIDAR, wheel odometry, and a MEMS IMU. Tests were carried out in a tunnel in Eisenerz, Austria. To evaluate the achievable accuracy, the estimated position of the filter is compared to a ground truth. The results show that with the developed sensor fusion algorithm, a horizontal positioning error of 1.07% of the traveled distance can be achieved.
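The reported accuracy metric, horizontal positioning error expressed as a percentage of the traveled distance, can be illustrated with a small helper. This is only a sketch of the metric; the paper's actual evaluation procedure against the tunnel ground truth may differ in detail.

```python
import math

def drift_percentage(estimated_path, ground_truth_path):
    """Final horizontal position error as a percentage of the total
    distance traveled along the ground-truth trajectory.

    Both paths are lists of (x, y) positions; only the endpoints are
    compared for the error, while the traveled distance is accumulated
    over all ground-truth segments.
    """
    traveled = sum(
        math.dist(a, b)
        for a, b in zip(ground_truth_path, ground_truth_path[1:])
    )
    final_error = math.dist(estimated_path[-1], ground_truth_path[-1])
    return 100.0 * final_error / traveled
```

For example, a filter estimate ending 1.07 m off after a 100 m ground-truth run yields exactly the 1.07% figure quoted above.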
The collaborative robot safety mode ‘power and force limiting’ requires compliance with biomechanical limits to ensure human safety. As part of the risk assessment, it is common to test possible contact points of a collaborative application for a) quasi-static contact (e.g. squeezing or clamping) and b) transient contact (collision with free impact). Although standardized power and force measuring devices (PFMD) are offered by different companies on the market, multiple, partly differing measuring methods exist. Especially for transient contact, the respective measuring setup is not consistently defined. Therefore, we carried out an investigation of three state-of-the-art measurement approaches for transient contacts: i) a fixed measuring device, ii) a linearly movable device on a sledge, and iii) a device on a pendulum. For a reproducible comparison, we first compared them from an analytical and an experimental perspective. Furthermore, we addressed the specific requirements of cobot applications within flexible working systems. Finally, we analyzed and interpreted the results to derive recommendations for selecting the measurement setup for transient contact.