Preprint

Collaborative Perception in Multi-Robot Systems: Case Studies in Household Cleaning and Warehouse Operations


Abstract

This paper explores the paradigm of Collaborative Perception (CP), in which multiple robots and environment-mounted sensors share and integrate sensor data to construct a comprehensive representation of the surroundings. By aggregating data from diverse sensors and applying advanced fusion algorithms, the collaborative perception framework improves task efficiency, coverage, and safety. Two case studies are presented to showcase the benefits of collaborative perception in multi-robot systems. The first case study illustrates the advantages of using CP for the task of household cleaning with a team of cleaning robots. The second case study performs a comparative analysis of the performance of CP versus Standalone Perception (SP) for Autonomous Mobile Robots operating in a warehouse environment. The case studies validate the effectiveness of CP in enhancing multi-robot coordination, task completion, and overall system performance, as well as its potential to benefit other application domains. Future investigations will focus on optimizing the framework and validating its performance through empirical testing.
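To make the data-aggregation idea concrete, the following is a minimal sketch (under assumed representations, not the paper's implementation) of late-fusion collaborative perception: each robot contributes a local occupancy grid in log-odds form, registered to a shared world frame, and the team map combines per-cell evidence. All function names and the grid layout are illustrative assumptions.

```python
# Minimal sketch of late-fusion collaborative perception: each robot
# contributes a local occupancy grid in a shared world frame, and the
# team map is formed by combining per-cell log-odds evidence.
# The grid representation and function names are illustrative
# assumptions, not the framework from the paper.
import numpy as np

def fuse_occupancy_grids(grids: list[np.ndarray]) -> np.ndarray:
    """Combine aligned per-robot log-odds grids into one team grid.

    Summing log-odds corresponds to treating each robot's
    observations as independent evidence for cell occupancy.
    """
    fused = np.zeros_like(grids[0])
    for g in grids:
        fused += g
    return fused

def to_probability(log_odds: np.ndarray) -> np.ndarray:
    """Convert log-odds back to occupancy probabilities."""
    return 1.0 / (1.0 + np.exp(-log_odds))

# Example: two robots observing overlapping regions of a small map.
robot_a = np.array([[0.0, 2.0], [0.0, -1.0]])
robot_b = np.array([[1.0, 2.0], [-2.0, 0.0]])
team_map = to_probability(fuse_occupancy_grids([robot_a, robot_b]))
print(team_map.round(2))
```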

References
Article
Full-text available
Modern neural networks have made great strides in recognising objects in images and are widely used in defect detection. However, the output of a neural network strongly depends on both the training dataset and the conditions under which the image was acquired for analysis. We have developed a software–hardware method for evaluating the effect of variable lighting on the results of defect recognition using a neural network model. The proposed approach allows us to analyse the recognition results of an existing neural network model and identify the optimal range of illumination at which the desired defects are recognised most consistently. For this purpose, we analysed the variability in quantitative parameters (area and orientation) of damage obtained at different degrees of illumination for two different light sources: LED and conventional incandescent lamps. We calculated each image's average illuminance and the quantitative parameters of recognised defects. Each set of parameters represents the results of defect recognition for a particular illuminance level of a given light source. The proposed approach allows the results obtained using different light sources and illumination levels to be compared, and the optimal source type and illuminance level to be identified. This makes it possible to implement a defect detection environment that achieves the best recognition accuracy and the most controlled product quality. An analysis of a steel sheet surface showed that the best recognition result was achieved at an illuminance of ~200 lx. An illuminance of less than ~150 lx does not allow most defects to be recognised, whereas an illuminance larger than ~250 lx increases the number of small objects that are falsely recognised as defects.
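A hedged sketch of the illuminance-screening idea above: estimate each image's average illuminance from mean pixel brightness and accept detections only inside the empirically stable band (~150-250 lx in the study). The brightness-to-lux calibration constant is an invented assumption for illustration.

```python
# Sketch: estimate per-image illuminance and gate defect detections
# by an empirically reliable lighting band. The mapping from pixel
# brightness to lux is an assumed calibration, not from the paper.
import numpy as np

LUX_PER_GRAY_LEVEL = 1.2  # hypothetical calibration constant

def average_illuminance(gray_image: np.ndarray) -> float:
    """Estimate scene illuminance (lx) from mean pixel intensity."""
    return float(gray_image.mean()) * LUX_PER_GRAY_LEVEL

def in_reliable_range(lux: float, lo: float = 150.0, hi: float = 250.0) -> bool:
    """Accept detections only when lighting is in the stable band."""
    return lo <= lux <= hi

image = np.random.default_rng(0).integers(0, 256, (480, 640), dtype=np.uint8)
lux = average_illuminance(image)
print(f"estimated illuminance: {lux:.0f} lx, reliable: {in_reliable_range(lux)}")
```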
Article
Full-text available
In urban intersections, the sensory capabilities of autonomous vehicles (AVs) are often hindered by visual obstructions, posing significant challenges to their robust and safe operation. This paper presents an implementation study focused on enhancing the safety and robustness of Connected Automated Vehicles (CAVs) in scenarios with occluded visibility at urban intersections. A novel LiDAR infrastructure system for roadside sensing is combined with Baidu Apollo's Automated Driving System (ADS) and Cohda Wireless V2X communication hardware to form an integrated platform for roadside perception enhancement in autonomous driving. The field tests were conducted at the Singapore CETRAN (Centre of Excellence for Testing & Research of Autonomous Vehicles—NTU) autonomous vehicle test track, with the communication protocol adhering to the SAE J2735 V2X communication standard. Communication latency and packet delivery ratio were analyzed as the evaluation metrics. The test results showed that the system can help CAVs detect obstacles in advance under urban occluded scenarios.
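As an illustration of the two evaluation metrics named above, here is a small sketch (with an assumed log format) that computes mean communication latency and packet delivery ratio (PDR) from paired send/receive timestamps:

```python
# Illustrative computation of communication latency and packet
# delivery ratio (PDR) from send/receive timestamp logs.
# The log structure (packet id -> timestamp) is an assumption.
def latency_and_pdr(sent: dict[int, float], received: dict[int, float]):
    """sent/received map packet id -> timestamp in seconds."""
    delivered = [pid for pid in sent if pid in received]
    latencies = [received[pid] - sent[pid] for pid in delivered]
    pdr = len(delivered) / len(sent) if sent else 0.0
    mean_latency = sum(latencies) / len(latencies) if latencies else float("nan")
    return mean_latency, pdr

sent = {1: 0.000, 2: 0.100, 3: 0.200}
received = {1: 0.018, 3: 0.223}  # packet 2 lost in transit
lat, pdr = latency_and_pdr(sent, received)
print(f"mean latency: {lat * 1e3:.1f} ms, PDR: {pdr:.2f}")
```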
Article
Full-text available
Robot skin, as the physical interface between collaborative robots (cobots) and external environments, can mediate on-body tactile signals that allow cobots to respond promptly when a collision occurs during tasks, thus avoiding damage to both the cobots and their surroundings. To achieve effective on-body tactile perception, robot skin should possess adequate conformal adaptability to cover the various complex contours of the cobot, such as joints, while maintaining accurate and rapid sensing performance. In this letter, a soft robot skin consisting of an array of force-sensing units and a substrate with a serpentine structure was developed, both of which were made of porous material. The serpentine structure endows the developed skin with stretchable attributes. Thus, the skin can be conformally attached to complex surfaces of a cobot with less local stress, which was validated by finite element analysis. Taking advantage of the porous microstructure, the sensing units embedded in the substrate exhibit high sensitivity and fast response time under external force, which are essential factors in real-time collision detection. Finally, experimental validation was carried out to verify the feasibility of deploying the developed soft robot skin on a robot arm to provide on-body tactile perception for a cobot.
Article
Full-text available
Cleaning multi-storey buildings needs to be considered while developing autonomous service robots. In this paper, we introduce a novel reconfigurable platform called sTetro with the ability to navigate on the floor as well as to detect and then climb staircases autonomously. To this end, we propose an operational framework for this cleaning robot that leverages a customized deep convolutional neural network (DCNN) and an RGBD camera to locate staircases in a prebuilt 3D map and then to plan trajectories that maximize area coverage for both floors and staircases in multi-storey environments. While building the 3D map, the staircase location is identified as the 3D point closest to the center of the staircase's first step, using a contour detection algorithm applied to the boundary of the staircase detected by the DCNN. The robot follows the planned trajectory to clean the floor, then approaches the staircase location accurately to execute the climbing mode, cleaning the staircase on its way to the next floor. The proposed methods achieve high accuracy in identifying the presence of different staircase types and their first-step locations. Moreover, multi-storey building evaluations have demonstrated the efficiency of sTetro in terms of area coverage of both staircases and floor free space.
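The contour-based first-step localization can be sketched as follows; the binary staircase mask stands in for the DCNN output, and the centroid computation via image moments is one plausible reading of the described algorithm, not the sTetro code:

```python
# Sketch: locate the center of a detected staircase step from a binary
# segmentation mask by taking the centroid of its boundary contour.
# The mask here is synthetic; in the paper the region comes from a DCNN.
import cv2
import numpy as np

def first_step_center(mask: np.ndarray) -> tuple[int, int]:
    """Return the (x, y) centroid of the largest contour in a binary mask."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

mask = np.zeros((240, 320), dtype=np.uint8)
cv2.rectangle(mask, (100, 150), (220, 200), 255, -1)  # synthetic step region
print(first_step_center(mask))
```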
Article
Full-text available
This paper presents a novel garbage pickup robot which operates on grass. The robot is able to detect garbage accurately and autonomously by using a deep neural network for garbage recognition. In addition, with ground segmentation using a deep neural network, a novel navigation strategy is proposed to guide the robot's movement. With the garbage recognition and automatic navigation functions, the robot can clean garbage on the ground in places like parks or schools efficiently and autonomously. Experimental results show that the garbage recognition accuracy can reach as high as 95%, and even without path planning, the navigation strategy achieves almost the same cleaning efficiency as traditional methods. Thus, the proposed robot can serve as a useful assistant, relieving sanitation workers of the physical labor of garbage cleaning tasks.
Article
Collaborative perception is essential to address occlusion and sensor failure issues in autonomous driving. In recent years, theoretical and experimental investigations of novel works for collaborative perception have increased tremendously. So far, however, few reviews have focused on systematical collaboration modules and large-scale collaborative perception datasets. This article reviews recent achievements in this field to bridge this gap and motivate future research. We start with a brief overview of collaboration schemes. After that, we systematically summarize the collaborative perception methods for ideal scenarios and real-world issues. The former focuses on collaboration modules and efficiency, and the latter is devoted to addressing the problems in actual application. Furthermore, we present large-scale public datasets and summarize quantitative results on these benchmarks. Finally, we highlight gaps and overlooked challenges between current academic research and real-world applications.
Article
Occlusion is a critical problem in autonomous driving systems. Solving this problem requires robust collaboration among autonomous vehicles traveling on the same roads. However, transferring the entirety of raw sensor data among autonomous vehicles is expensive and can cause communication delays. This paper proposes a method called Realtime Collaborative Vehicular Communication based on a Bird's-Eye-View (BEV) map. The BEV map holds accurate depth information from the point cloud image, while its 2D representation enables the method to use a novel and well-trained image-based backbone network. Most importantly, we encode the object detection results into the BEV representation to reduce the volume of data transmission and make real-time collaboration between autonomous vehicles possible. The output of this process, the BEV map, can also be used as direct input to most route planning modules. Numerical results show that this novel method can increase the accuracy of object detection by cross-verifying the results from multiple points of view. Thus, in the process, this new method also reduces the object detection challenges that stem from occlusion and partial occlusion. Additionally, unlike many existing methods, this new method significantly reduces the data needed for transfer between vehicles, achieving a speed of 21.92 Hz for both the object detection process and the data transmission process, which is sufficiently fast for a real-time system.
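A minimal sketch of the core data-reduction idea, encoding detection results into a compact BEV grid before transmission; the grid extent, resolution, and detection tuple layout are assumptions, not the paper's exact format:

```python
# Sketch: rasterize object-detection results into a small BEV occupancy
# grid so vehicles can share detections instead of raw point clouds.
# Extent, resolution, and the (x, y, length, width) box layout are
# illustrative assumptions.
import numpy as np

def detections_to_bev(detections, extent=50.0, resolution=0.5):
    """detections: iterable of (x, y, length, width) in the ego frame (m)."""
    size = int(2 * extent / resolution)
    bev = np.zeros((size, size), dtype=np.uint8)
    for x, y, length, width in detections:
        # Convert metric, axis-aligned box corners to grid indices.
        x0 = int((x - length / 2 + extent) / resolution)
        x1 = int((x + length / 2 + extent) / resolution)
        y0 = int((y - width / 2 + extent) / resolution)
        y1 = int((y + width / 2 + extent) / resolution)
        bev[max(y0, 0):min(y1, size), max(x0, 0):min(x1, size)] = 1
    return bev  # compact payload: size*size bytes before compression

bev = detections_to_bev([(10.0, -3.0, 4.5, 2.0), (-8.0, 6.0, 4.5, 2.0)])
print(bev.shape, int(bev.sum()), "occupied cells")
```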
Article
Collaborative robotics is an umbrella term that conveys the general idea of machines operating in proximity to humans for some useful task, not necessarily continuously or synchronously in a shared space. Useful tasks imply benefits for the workload of factory operators (e.g. cognitive and physical ergonomics) or for quality and productivity (e.g. more accessible layouts, troubleshooting, frequent process observation, etc.). The current state of the art of collaborative robotics in industry is the result of a long legacy of research and development in control theory, a major effort in actuation and mechanism design in the 2000s resulting in commercial outcomes, and abundant literature on improving control performance and intuitive interaction modes. Safety technology and standardization are reported as a subject coupled with machinery design and equipment, since safe collaborative robotics aims at identifying hazards, estimating the risks of applications, and reducing such risks by a combination of different strategies and options available for different classes of robot systems.
Article
Recently, significant gains have been made in our understanding of multi-robot systems, and such systems have been deployed in domains as diverse as precision agriculture, flexible manufacturing, environmental monitoring, search-and-rescue operations, and even swarming robotic toys. What has enabled these developments is a combination of technological advances in performance, price, and scale of the platforms themselves, and a new understanding of how the robots should be organized algorithmically. In this paper, we focus on the latter of these advances, with particular emphasis on decentralized control and coordination strategies as they pertain to multi-robot systems. The paper discusses a class of problems related to the assembly of preferable geometric shapes in a decentralized manner through the formulation of descent-based algorithms defined with respect to team-level performance costs.
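The descent-based flavour of such decentralized coordination can be sketched as follows: each robot, using only locally sensed neighbours, steps down the gradient of a team-level cost that penalizes deviations of inter-robot distances from a desired shape. The specific cost, gain, and synchronous update are illustrative choices, not the paper's algorithm:

```python
# Sketch: decentralized formation control by gradient descent on a
# team-level cost 0.5 * sum_ij (||x_i - x_j|| - d_ij)^2. Each robot
# uses only its neighbours' positions (local information).
import numpy as np

def formation_step(positions, desired, neighbors, gain=0.1):
    """One synchronous descent step.

    positions: (n, 2) array; desired[(i, j)]: target distance;
    neighbors[i]: indices robot i can sense.
    """
    new_positions = positions.copy()
    for i, nbrs in neighbors.items():
        grad = np.zeros(2)
        for j in nbrs:
            diff = positions[i] - positions[j]
            dist = np.linalg.norm(diff)
            grad += (dist - desired[(i, j)]) * diff / dist
        new_positions[i] -= gain * grad
    return new_positions

pos = np.array([[0.0, 0.0], [2.0, 0.1], [1.0, 1.5]])
desired = {(0, 1): 1.0, (1, 0): 1.0, (0, 2): 1.0, (2, 0): 1.0,
           (1, 2): 1.0, (2, 1): 1.0}
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
for _ in range(200):
    pos = formation_step(pos, desired, neighbors)
print(pos.round(2))  # converges toward an equilateral triangle
```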
Conference Paper
In cooperative perception systems, different vehicles share object data obtained by their local environment perception sensors, such as radar or lidar, via wireless communication. In this paper, this so-called Car2X-based perception is modeled as a virtual sensor in order to integrate it into a high-level sensor data fusion architecture. The spatial and temporal alignment of incoming data is a major issue in cooperative perception systems. Temporal alignment is done by predicting the received object data with a model-based approach. In this context, the CTRA (constant turn rate and acceleration) motion model is used for a three-dimensional prediction of the communication partner's motion. Concerning the spatial alignment, two approaches to transform the received data, including the uncertainties, into the receiving vehicle's local coordinate frame are compared. The approach using an unscented transformation is shown to be superior to the approach of linearizing the transformation function. Experimental results prove the accuracy and consistency of the virtual sensor's output.
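A hedged sketch of the temporal-alignment step: propagating a communication partner's planar pose forward by the communication delay dt using the CTRA model (the paper uses a three-dimensional variant; the state layout here is an assumption):

```python
# Sketch: closed-form CTRA (constant turn rate and acceleration)
# prediction of a planar pose over a horizon dt, used to temporally
# align received object data with the receiver's clock.
import math

def ctra_predict(x, y, theta, v, omega, a, dt):
    """Propagate (x, y, heading, speed) by dt under CTRA."""
    if abs(omega) < 1e-6:  # straight-line limit avoids division by zero
        x += (v * dt + 0.5 * a * dt**2) * math.cos(theta)
        y += (v * dt + 0.5 * a * dt**2) * math.sin(theta)
    else:
        v_new = v + a * dt
        theta_new = theta + omega * dt
        x += (v_new * math.sin(theta_new) - v * math.sin(theta)) / omega \
             + a * (math.cos(theta_new) - math.cos(theta)) / omega**2
        y += (-v_new * math.cos(theta_new) + v * math.cos(theta)) / omega \
             + a * (math.sin(theta_new) - math.sin(theta)) / omega**2
    return x, y, theta + omega * dt, v + a * dt

# Example: 10 m/s, turning at 0.2 rad/s, accelerating 1 m/s^2, 0.5 s ahead.
print(ctra_predict(0.0, 0.0, 0.0, 10.0, 0.2, 1.0, 0.5))
```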
Conference Paper
In real world planning problems, time for deliberation is often limited. Anytime planners are well suited for these problems: they find a feasible solution quickly and then continually work on improving it until time runs out. In this paper we propose an anytime heuristic search, ARA*, which tunes its performance bound based on available search time. It starts by finding a suboptimal solution quickly using a loose bound, then tightens the bound progressively as time allows. Given enough time it finds a provably optimal solution. While improving its bound, ARA* reuses previous search efforts and, as a result, is significantly more efficient than other anytime search methods. In addition to our theoretical analysis, we demonstrate the practical utility of ARA* with experiments on a simulated robot kinematic arm and a dynamic path planning problem for an outdoor rover.
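A simplified sketch of the anytime idea behind ARA*: run weighted A* with an inflated heuristic for a fast, bounded-suboptimal path, then re-plan with tighter inflation while time remains. Real ARA* additionally reuses search effort between iterations, which this illustration omits:

```python
# Sketch: anytime planning via repeated weighted A* with decreasing
# heuristic inflation epsilon. Each solution's cost is within epsilon
# of optimal. (ARA* proper also reuses prior search effort.)
import heapq

def weighted_astar(start, goal, neighbors, h, epsilon):
    """A* with heuristic inflated by epsilon; returns path cost or None."""
    open_heap = [(epsilon * h(start), 0.0, start)]
    best_g = {start: 0.0}
    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + epsilon * h(nxt), ng, nxt))
    return None

def anytime_plan(start, goal, neighbors, h, epsilons=(3.0, 2.0, 1.5, 1.0)):
    """Yield solutions of improving bounded quality, ARA*-style."""
    for eps in epsilons:
        cost = weighted_astar(start, goal, neighbors, h, eps)
        if cost is not None:
            yield eps, cost

# Toy 4-connected 10x10 grid example.
def grid_neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < 10 and 0 <= y + dy < 10:
            yield (x + dx, y + dy), 1.0

h = lambda p: abs(p[0] - 9) + abs(p[1] - 9)  # Manhattan distance
for eps, cost in anytime_plan((0, 0), (9, 9), grid_neighbors, h):
    print(f"epsilon={eps}: path cost {cost}")
```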
S. Liu, C. Gao, Y. Chen, X. Peng, X. Kong, K. Wang, R. Xu, W. Jiang, H. Xiang, J. Ma et al., "Towards vehicle-to-everything autonomous driving: A survey on collaborative perception," arXiv preprint arXiv:2308.16714, 2023.
T.-H. Wang, S. Manivasagam, M. Liang, B. Yang, W. Zeng, and R. Urtasun, "V2VNet: Vehicle-to-vehicle communication for joint perception and prediction," in Computer Vision - ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part II. Springer, 2020, pp. 605-621.
T. Huang, J. Liu, X. Zhou, D. C. Nguyen, M. R. Azghadi, Y. Xia, Q.-L. Han, and S. Sun, "V2X cooperative perception for autonomous driving: Recent advances and challenges," arXiv preprint arXiv:2310.03525, 2023.
Open Source Robotics Foundation, "costmap_2d - ROS Wiki," https://wiki.ros.org/costmap_2d, 2024 [Accessed: July 28, 2024].