Fig. 6
Source publication
In this chapter, novel approaches for the detection of logistical objects (loading units) in material flow applications are presented comparatively, focusing on solutions that use low-cost 3D sensors. These approaches represent substantial changes compared with the traditional system design of logistic processes. Complex 3D-vision systems, cos...
Contexts in source publication
Context 1
... key idea of the concept is to generate captures of the loading scene from different views with the depth camera and to reconstruct a detailed 3D map of the scene. Afterwards, individual quad-like loading units are detected for automatic handling by the robot. The novel load detection pipeline is illustrated in Fig. 6. An essential step is the system calibration. Since the robot base coordinate system serves as the world coordinate system of the reconstructed 3D map of the environment, the relationship between the camera coordinate system and the robot coordinate system must be established in advance, known as hand-eye calibration (see Sect. 3.2). In the ...
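As a minimal illustration of why this hand-eye calibration is needed, the sketch below (in Python, with purely hypothetical transform values rather than the chapter's actual calibration result) chains the camera-to-gripper and gripper-to-base transforms to express a depth measurement in the robot base, i.e. world, frame:

```python
import numpy as np

def to_homogeneous(p):
    """Append 1 to a 3D point so it can be multiplied with 4x4 transforms."""
    return np.append(p, 1.0)

# Hypothetical hand-eye result: pose of the camera in the gripper frame
# (in practice obtained once from a hand-eye calibration procedure).
T_gripper_cam = np.array([
    [1.0, 0.0, 0.0, 0.05],   # camera offset of 5 cm along the gripper x-axis (assumed)
    [0.0, 1.0, 0.0, 0.00],
    [0.0, 0.0, 1.0, 0.10],   # and 10 cm along the gripper z-axis (assumed)
    [0.0, 0.0, 0.0, 1.0],
])

# Current robot pose: gripper expressed in the robot base (world) frame,
# typically read from the robot controller / forward kinematics.
T_base_gripper = np.array([
    [0.0, -1.0, 0.0, 0.60],
    [1.0,  0.0, 0.0, 0.20],
    [0.0,  0.0, 1.0, 0.80],
    [0.0,  0.0, 0.0, 1.0],
])

# A 3D point measured by the depth camera, in camera coordinates (metres).
p_cam = np.array([0.10, -0.05, 0.75])

# Chain the transforms: camera -> gripper -> robot base (world frame).
p_base = T_base_gripper @ T_gripper_cam @ to_homogeneous(p_cam)
print("point in robot base frame:", p_base[:3])
```

With the hand-eye transform known, every capture taken from a new robot pose can be fused into the same world-referenced 3D map simply by updating T_base_gripper from the robot's forward kinematics.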
Similar publications
With the advancement of digital camera technology, the use of high-resolution images and videos has become widespread in modern society. In particular, image and video frame registration is frequently applied in computer graphics and film production. However, conventional registration approaches usually require long computational time for...
Photo Response Non-Uniformity (PRNU) based source camera attribution is an effective method to determine the origin camera of visual media (an image or a video). However, given that modern devices, especially smartphones, capture images and videos at different resolutions using the same sensor array, PRNU attribution can become ineffective as the...
Whereas modern digital cameras use a pixelated detector array to capture images, single-pixel imaging reconstructs images by sampling a scene with a series of masks and associating the knowledge of these masks with the corresponding intensity measured with a single-pixel detector. Though not performing as well as digital cameras in conventional vis...
Citations
... The research domain of object activity recognition is significantly smaller than that of HAR. For pallets, research tends to focus on tracking in terms of temperature deviations or location [15,16,17,18]. In [19], the authors equip pallets with sensors and develop a system that monitors humidity and temperature in the surrounding environment to draw conclusions about the state of the transported food. ...
Pallets are one of the most important load carriers for international supply chains. Yet, continuously tracking activities such as Driving, Lifting or Standing along their life cycle is hardly possible. This contribution is the first to propose a taxonomy for sensor-based activity recognition of pallets. Different types of acceleration sensors are deployed in three logistical scenarios for creating a benchmark dataset. A random forest classifier is deployed for supervised learning. The results demonstrate that automated, sensor-based life cycle assessment based on the proposed taxonomy is feasible. All data and corresponding videos are published in the SPARL dataset [1].
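As a rough illustration of such a supervised setup, the sketch below trains scikit-learn's RandomForestClassifier on simple per-axis statistics of acceleration windows; the synthetic data and the feature choice are illustrative assumptions, not the SPARL pipeline itself:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Illustrative stand-in data: 300 windows of 3-axis acceleration (100 samples each),
# labelled with three pallet activities. Real data would come from the SPARL dataset.
windows = rng.normal(size=(300, 100, 3))
labels = rng.integers(0, 3, size=300)          # 0=Driving, 1=Lifting, 2=Standing

def extract_features(w):
    """Simple per-axis statistics as a feature vector for one window."""
    return np.concatenate([w.mean(axis=0), w.std(axis=0),
                           w.min(axis=0), w.max(axis=0)])

X = np.array([extract_features(w) for w in windows])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```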
... In 2015, Prasse et al. [Pra+15] used a robot arm and low-cost 3D sensors to perform the task of depalletization. They pursued two approaches. ...
... The atomic loading units are assumed to have a cuboid shape and are determined by employing a Random Sample Consensus (RANSAC) [FB87] algorithm. Prasse et al. [Pra+15] evaluate the PMD approach on real data and report the deviation of the package dimensions for 9 parcels. Since the deviation in height is less than their threshold of 10 mm, they argue that the approach is suitable for application in logistics. ...
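For illustration, the following hand-rolled sketch shows the general RANSAC idea applied to fitting a single planar face in a point cloud; the parameters and the toy data are assumptions and do not reproduce the cited cuboid-detection procedure:

```python
import numpy as np

def ransac_plane(points, n_iters=500, threshold=0.01, seed=0):
    """Fit a plane to a point cloud with RANSAC.

    Repeatedly samples 3 points, builds the plane through them and keeps the
    plane supported by the most inliers (points closer than `threshold`).
    Returns (normal, d, inlier_mask) for the plane n·x + d = 0.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane[0], best_plane[1], best_inliers

# Toy example: a noisy horizontal plane (e.g. a parcel top face) plus outliers.
rng = np.random.default_rng(1)
plane_pts = np.column_stack([rng.uniform(0, 0.4, 500), rng.uniform(0, 0.3, 500),
                             0.20 + rng.normal(0, 0.002, 500)])
outliers = rng.uniform(0, 0.4, (100, 3))
normal, d, mask = ransac_plane(np.vstack([plane_pts, outliers]))
print("plane normal:", normal, "inliers:", mask.sum())
```

A cuboid-shaped unit can then be assembled from several such plane fits (e.g. the top face plus side faces), which is the general role RANSAC plays in this kind of load detection.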
Computer vision applications in transportation logistics and warehousing have a huge potential for process automation. We present a structured literature review on research in the field to help leverage this potential. All literature is categorized w.r.t. the application, i.e. the task it tackles, and w.r.t. the computer vision techniques that are used. Regarding applications, we subdivide the literature into two areas: monitoring, i.e. observing and retrieving relevant information from the environment, and manipulation, where approaches are used to analyze and interact with the environment. In addition, we point out directions for future research and link to recent developments in computer vision that are suitable for application in logistics. Finally, we present an overview of existing datasets and industrial solutions. We conclude that while many research areas have already been investigated, there is still huge potential for future research. The results of our analysis are also available online at https://a-nau.github.io/cv-in-logistics.
... The variety of goods and the variance in pallet packing patterns call for novel sensors that check the packing pattern for palletizing and depalletizing, or that detect individual goods in collection bins and determine their pose. Imaging sensors such as high-resolution 2D cameras or various kinds of 3D measurement systems are the focus of research and development here (Prasse et al. 2015). Personnel safety for the robot cells is often realized with certified safety light curtains or safety laser scanners. ...
When a sensor already provides numerous functions for meeting customer requirements through complex data processing and detailed application knowledge embedded in the sensor, and further integrated functions enable automatic adaptation and optimization of these functions, one speaks of intelligent sensor technology. Intelligent sensors are necessary to achieve the autonomy required of machines and plants in the transition to cyber-physical systems (CPS) in the context of Industrie 4.0, while at the same time simplifying the engineering of CPS. The focus of this contribution is a discussion of the properties of an intelligent sensor and its use in exemplary logistics applications that can benefit from the transition to CPS. Only intelligent sensor technology makes future scenarios such as the smart factory appear realizable.
We focus on enabling damage and tampering detection in logistics and tackle the problem of 3D shape reconstruction of potentially damaged parcels. As input we utilize single RGB images, which corresponds to use-cases where only simple handheld devices are available, e.g. for postmen during delivery or clients upon delivery. We present a novel synthetic dataset, named Parcel3D, that is based on the Google Scanned Objects (GSO) dataset and consists of more than 13,000 images of parcels with full 3D annotations. The dataset contains intact, i.e. cuboid-shaped, parcels and damaged parcels, which were generated in simulations. We work towards detecting mishandling of parcels by presenting a novel architecture called CubeRefine R-CNN, which combines estimating a 3D bounding box with an iterative mesh refinement. We benchmark our approach on Parcel3D and an existing dataset of cuboid-shaped parcels in real-world scenarios. Our results show that while training on Parcel3D enables transfer to the real world, enabling reliable deployment in real-world scenarios is still challenging. CubeRefine R-CNN yields competitive performance in terms of Mesh AP and is the only model that directly enables deformation assessment by 3D mesh comparison and tampering detection by comparing viewpoint-invariant parcel side surface representations. Dataset and code are available at https://a-nau.github.io/parcel3d.
This paper presents a mobile manipulation platform designed for autonomous depalletizing tasks. The proposed solution integrates machine vision, control and mechanical components to increase flexibility and ease of deployment in industrial environments such as warehouses. A collaborative robot mounted on a mobile base is proposed, equipped with a simple manipulation tool and a 3D in-hand vision system that detects parcel boxes on a pallet and pulls them one by one onto the mobile base for transportation. The robot setup avoids the cumbersome implementation of pick-and-place operations, since it does not require lifting the boxes. The 3D vision system is used to provide an initial estimate of the pose of the boxes on the top layer of the pallet and to accurately detect the separation between the boxes for manipulation. Force measurements provided by the robot, together with admittance control, are exploited to verify the correct execution of the manipulation task. The proposed system was implemented and tested in a simplified laboratory scenario and the results of experimental trials are reported.
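The following one-dimensional sketch illustrates the general admittance-control idea used to supervise such a pulling motion; the virtual dynamics, gains, force values and abort threshold are illustrative assumptions, not the parameters of the paper:

```python
import numpy as np

# One-dimensional force-tracking admittance along the pulling direction:
#   M * dv/dt + D * v = f_measured - f_expected
# so the velocity offset v stays near zero while the contact force matches
# the force expected for a box sliding onto the mobile base.
M, D = 2.0, 40.0           # virtual mass [kg] and damping [N*s/m] (illustrative)
dt = 0.002                 # control period [s]
v_nominal = 0.05           # nominal pulling speed [m/s]
f_expected = 15.0          # expected sliding force [N] (assumed)

v_offset = 0.0
rng = np.random.default_rng(0)

for step in range(1000):
    f_measured = 15.0 + rng.normal(0.0, 0.5)   # stand-in for the wrist F/T reading
    accel = ((f_measured - f_expected) - D * v_offset) / M
    v_offset += accel * dt                      # Euler integration of the admittance
    v_command = v_nominal + v_offset            # velocity sent to the robot controller

    # Plausibility check: a force far outside the expected band indicates a
    # jammed or lost box, so the pull is flagged and aborted.
    if abs(f_measured - f_expected) > 10.0:
        print("unexpected contact force at step", step, "- aborting pull")
        break
```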
This work addresses the task of robotic depalletizing by means of a mobile manipulator, taking into account the problem of localizing the boxes to be removed from the pallet and a manipulation strategy that allows the boxes to be pulled without lifting them with the robot arm. The depalletizing task is of particular interest in the industrial scenario in order to increase the efficiency, flexibility and economic affordability of automated warehouses. The proposed solution makes use of a multi-sensor vision system and a force-controlled collaborative robot in order to detect the boxes on the pallet and to control the robot's interaction with the boxes to be removed. The vision system comprises a fixed 3D time-of-flight camera and an eye-in-hand 2D camera. Preliminary experimental results obtained on a laboratory setup with a fixed-base robotic manipulator are reported to show the effectiveness of the perception and control system.
The need for automated distribution systems has recently grown due to labor shortages and an increasing preference among consumers to shop online. The process of depalletizing roll box pallets (RBPs) requires skillful work in a limited space, but an automated RBP depalletizing system with a speed equal to or exceeding that of human workers has yet to be developed. We solved this issue using two technologies: a system design and an algorithm for prioritizing the packages to be removed from the stack. We designed a high-speed depalletizing system that can be installed in a narrow space at the distribution site. Equipped with a compact linear-actuator mechanism, the system utilizes an end effector that can handle multiple packages simultaneously. Moreover, by optimizing the process of transferring packages on the system, we increased the speed. Our algorithm for prioritizing packages allows the proper selection of packages held simultaneously for faster depalletizing. However, if the height of complexly stacked packages is not known, depalletizing is difficult. To solve this issue, we developed an algorithm with special rules. Finally, we conducted depalletizing trials with the latest prototype system using different arrangements of stacked packages and achieved an average speed of 683 packages per hour.
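Purely as an illustration of this kind of prioritization (and not a reproduction of the paper's rules), the sketch below greedily works through the current top layer and groups neighbouring packages of equal height so that several of them can be removed in one stroke:

```python
from dataclasses import dataclass

@dataclass
class Package:
    x: float       # lateral position on the pallet [m]
    top: float     # height of the package's top surface [m]

def pick_order(packages, grip_width=0.6, height_tol=0.01):
    """Greedy prioritization sketch: always work on the current top layer and
    group neighbouring packages of (nearly) equal top height so the end
    effector can remove several of them simultaneously."""
    remaining = sorted(packages, key=lambda p: (-p.top, p.x))
    order = []
    while remaining:
        top = remaining[0].top
        # candidates on the current top layer
        layer = sorted((p for p in remaining if abs(p.top - top) < height_tol),
                       key=lambda p: p.x)
        # take as many adjacent packages as fit under the gripper width
        group = [layer[0]]
        for p in layer[1:]:
            if p.x - group[0].x <= grip_width:
                group.append(p)
            else:
                break
        order.append(group)
        remaining = [p for p in remaining if not any(p is g for g in group)]
    return order

# Toy stack: three packages on the top layer, two below (illustrative values).
stack = [Package(0.0, 1.2), Package(0.3, 1.2), Package(0.7, 1.2),
         Package(0.0, 0.8), Package(0.4, 0.8)]
for i, group in enumerate(pick_order(stack), 1):
    print(f"pick {i}: {[(p.x, p.top) for p in group]}")
```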