Preprint (PDF available)

A SHORT SURVEY OF IMAGE PROCESSING IN LOGISTICS

Authors:
  • Thorsis Technologies GmbH

Abstract

In this article, applications and trends of image processing in logistics are presented. The term image processing refers to the entirety of systems that capture images and of operations that are applied to images and either produce enhanced images or extract data from them. The information captured by image processing is used to control or monitor logistics processes and thus contributes to the smartness of smart logistics zones.

Published in: Schenk, Michael (Ed.): 11th International Doctoral Students Workshop on Logistics, June 19, 2018, Magdeburg: Institut für Logistik und Materialflusstechnik an der Otto-von-Guericke-Universität Magdeburg.
... In recent years, digital transformation processes have been used to optimize logistics in factories. References [2] and [3] discuss how image processing and computer vision techniques contribute to improving logistics by automating a great variety of operations, for example security and protection, storage occupancy monitoring, traceability and trackability, and inspection for quality control. ...
... Finally, note that we need 10 feature vectors from previous detections to identify a pallet. The tracker results are reported in terms of IoU, as defined in (2), and its performance is reported in FPS. They are shown in Tables VII and VIII. ...
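The IoU metric referred to in the snippet above can be sketched as follows. This is a minimal illustration for axis-aligned boxes given as (x1, y1, x2, y2), not necessarily the exact formulation of the cited paper's equation (2).

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = area A + area B - intersection
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175, roughly 0.14
```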
... Camera images from the drone provide the information-rich input needed for the assessment of the delivery environment. Within the scope of computer vision (CV), image processing concerns the manipulation and analysis of images through the tuning of various factors and features of the images [16]. Transformations are applied to an input image, and the resulting output image is returned. ...
Article
Full-text available
The aim of producing self-driving drones has driven many researchers to automate various drone driving functions, such as take-off, navigation, and landing. However, despite the emergence of delivery as one of the most important uses of autonomous drones, there is still no automatic way to verify the safety of the delivery stage. One of the primary steps in the delivery operation is to ensure that the dropping zone is a safe area on arrival and during the dropping process. This paper proposes an image-processing-based classification approach for the delivery drone dropping process at a predefined destination. It employs live streaming via a single onboard camera and Global Positioning System (GPS) information. A two-stage processing procedure is proposed based on image segmentation and classification. Relevant parameters such as camera parameters, light parameters, dropping zone dimensions, and drone height from the ground are taken into account in the classification. The experimental results indicate that the proposed approach provides a fast method with reliable accuracy based on low-order calculations.
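As a rough illustration of the two-stage procedure described in this abstract, the following sketch segments clear ground by colour thresholding and then classifies the zone by its free-area fraction. The HSV range, the safety threshold and the input file are assumptions made for illustration, not the paper's actual parameters or method.

```python
import cv2
import numpy as np

def is_dropping_zone_safe(frame_bgr, free_area_ratio=0.9):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Stage 1: segmentation - mark low-saturation pixels as candidate free ground
    ground_mask = cv2.inRange(hsv, (0, 0, 60), (180, 60, 255))
    ground_mask = cv2.morphologyEx(ground_mask, cv2.MORPH_OPEN,
                                   np.ones((5, 5), np.uint8))
    # Stage 2: classification - compare the free-ground fraction to the threshold
    free_fraction = cv2.countNonZero(ground_mask) / ground_mask.size
    return free_fraction >= free_area_ratio

frame = cv2.imread("dropping_zone.jpg")  # placeholder camera frame
if frame is not None:
    print("safe" if is_dropping_zone_safe(frame) else "not safe")
```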
... Second, a fast load planning algorithm is needed that uses these measurements to check whether the assets assigned to one delivery route can be feasibly transported together and to propose a load plan for loading the trailer. Sensors and cameras are frequently used in logistics solutions (Borstell et al. 2018; Riestock et al. 2019). Camera-based systems for volume estimation exist and are, for instance, proposed by Borstell et al. (2013) and Kucuk et al. (2019). ...
... In the last few years, digital transformation processes, and more specifically machine vision, have begun to be used to optimise logistics in factories, as reviewed by [4]. The work of [5] and [6] discusses how image processing and computer vision techniques contribute to improving logistics by automating a great variety of operations, such as security and protection, storage occupancy, traceability and trackability, or inspection for quality control. ...
Article
Full-text available
The paper industry manufactures corrugated cardboard packaging, which is unassembled and stacked on pallets to be supplied to its customers. Human operators usually classify these pallets according to the physical features of the cardboard packaging. This process can be slow, causing congestion on the production line. To optimise the logistics of this process, we propose a visual recognition and tracking pipeline that monitors the palletised packaging while it is moving inside the factory on roller conveyors. Our pipeline has a two-stage architecture composed of Convolutional Neural Networks, one for oriented pallet detection and recognition, and another with which to track identified pallets. We carried out an extensive study using different methods for the pallet detection and tracking tasks and discovered that the oriented object detection approach was the most suitable. Our proposal recognises and tracks different configurations and visual appearance of palletised packaging, providing statistical data in real time with which to assist human operators in decision-making. We tested the precision-performance of the system at the Smurfit Kappa facilities. Our proposal attained an Average Precision (AP) of 0.93 at 14 Frames Per Second (FPS), losing only 1% of detections. Our system is, therefore, able to optimise and speed up the process of logistic distribution.
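The tracking stage described above can be illustrated with a deliberately simplified matcher: detections of the current frame are greedily assigned to the nearest existing track. The actual pipeline uses oriented CNN detections and appearance features accumulated over several frames; this centroid-based stand-in only conveys the general idea.

```python
import math

def update_tracks(tracks, detections, max_dist=50.0):
    """tracks: {track_id: (x, y)} centres; detections: list of (x, y) centres."""
    unmatched = list(range(len(detections)))
    for track_id, centre in tracks.items():
        if not unmatched:
            break
        j = min(unmatched, key=lambda k: math.dist(centre, detections[k]))
        if math.dist(centre, detections[j]) <= max_dist:
            tracks[track_id] = detections[j]  # refresh the matched track
            unmatched.remove(j)
    next_id = max(tracks, default=0) + 1
    for j in unmatched:                       # open a new track per unmatched detection
        tracks[next_id] = detections[j]
        next_id += 1
    return tracks

tracks = {1: (100.0, 200.0)}
print(update_tracks(tracks, [(105.0, 205.0), (400.0, 80.0)]))
# track 1 is updated, a new track 2 is created for the second detection
```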
... With the use of deep learning based on convolutional neural networks (CNN) [4,5,6], image recognition [7] can use inherent, optically distinguishable properties of the parts for their classification with no external tags required. This makes this technology well suited to monitor processes that involve the direct handling of individual parts. ...
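A minimal sketch of tag-free part classification with a CNN, assuming PyTorch/torchvision (0.13 or newer for the weights enum). An ImageNet-pretrained ResNet-18 stands in for a network that a real system would fine-tune on images of the actual parts; the input file is a placeholder.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Standard ImageNet preprocessing for the pretrained network
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval()

image = Image.open("part.jpg").convert("RGB")       # placeholder part image
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))  # 1 x 1000 class scores
print(int(logits.argmax()))                         # index of the predicted class
```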
Article
Full-text available
In the automotive industry, the "completely knocked down" (CKD) business requires the identification and verification of a large number of unmarked parts. This is often a manual activity with a high risk of failure. Deep learning methods offer flexibility and a useful distinction between object classes for automatic classification. However, these methods require large amounts of labeled training data and are sometimes inaccurate. In comparison, classical algorithms can achieve higher accuracy, but are inflexible and therefore unsuitable. This paper presents an approach to increase process reliability while alleviating the data labeling effort by using synthetic training data from spatially scanned and virtualized parts and by combining deep learning and classical algorithms. The result is a flexible, sensor-independent, and autonomously trained classification system.
Article
Full-text available
With cutting-edge technology and data-driven insights, intelligent logistics has become a game-changing supply chain management strategy, helping maximize productivity and streamline processes. The main ideas of smart logistics are outlined in this study. It includes several factors that companies must consider while assessing and implementing intelligent logistics solutions. Real-time visibility, supply chain collaboration, inventory management, robotics and automation, supply chain integration, predictive analytics, scalability and flexibility, security and data privacy, and return on investment are some requirements. Using these criteria, organizations may use intelligent logistics to optimize inventory levels, boost customer happiness, save costs, simplify procedures, and enhance supply chain efficiency. In a dynamic and connected world, intelligent logistics solutions help businesses make data-driven choices, react fast to market needs, and adjust to changing business situations. We applied machine learning algorithms to analyze the supply chain dataset. We used two machine learning algorithms: a decision tree and a random forest. We compute the accuracy, precision, recall, and F1 score. The decision tree has the highest accuracy, with 91%. Then, we used the multi-criteria decision-making (MCDM) methodology to analyze the criteria of intelligent logistics. The AHP method is used to compute the weights of the criteria.
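The model comparison mentioned in this abstract can be sketched with scikit-learn as follows; the dataset file, column names and binary target are placeholders rather than details of the study.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("supply_chain.csv")            # placeholder dataset with a binary "label" column
X, y = df.drop(columns=["label"]), df["label"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

for name, model in [("decision tree", DecisionTreeClassifier(random_state=42)),
                    ("random forest", RandomForestClassifier(random_state=42))]:
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    # Report the four metrics compared in the study
    print(name,
          accuracy_score(y_test, pred),
          precision_score(y_test, pred),
          recall_score(y_test, pred),
          f1_score(y_test, pred))
```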
Chapter
Digitalization revolutionizes business with the growing influx of technological innovation, possibly fueling the transition toward a more sustainable way of value creation. Apart from hardware-oriented mega-trends like robotics, it is mainly the software-based digital technologies that create fundamental change in processes, operations, functions, and even entire business models. Apart from the sheer introduction of the individual technologies to different application areas of the transportation and logistics sector, a clear picture of the prerequisites and expectable impacts of a holistic digital transformation is still not available though. In this chapter the authors address the research gap with a profound insight into theory and practice of digitalization in the transportation and logistics sector. Moreover, they develop a methodology for a structured evaluation of the digital transformation. The evaluation approach considers economic, ecological, and social dimensions at different levels of planning focusing on the respective requirements and the influences to be gained. With such a structured evaluation approach, researchers and decision-makers from practice are given a tool at hand to consider the extensive effects of digital transformation.
Chapter
Full-text available
For situation assessment on large areas in logistics hubs, passenger terminals or event sites, image- and radio-based sensors are increasingly being integrated into complex, heterogeneous sensor systems. A key problem is presenting the information in a way that matches the needs of operators with specific tasks in process control and process monitoring. We present a hybrid sensor system based on radio and image sensors in which the externally visible system complexity is reduced by applying the concept of the Virtual Top View. Image and radio data from the individual sensors, as well as the results of algorithmic computations, are presented to the user in a clearly arranged form within a 3D visualization module.
Conference Paper
Full-text available
This paper presents a system for process-integrated volume measurement of logistic pallet structures (palletized load units) based on novel low-cost depth-image sensors with an integrated colour camera. The system is characterized by short standstill times of the transport vehicles at the measuring stations (<3 s), high measurement accuracy (1-3 cm) and low-maintenance operation, since it dispenses with mechanical components such as linear guides on the sensor. These system properties essentially result from the particular ability of the sensors used to perform area-wide measurements at high resolution (640x480 pixels) and high frequency (30 Hz). The measuring system is therefore suitable for integrated use along the goods chain, for example at goods receipt, during put-away or at goods issue, enabling volume-based storage and transport optimization in real time.
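A much simplified sketch of single-shot volume estimation from a top-down depth image, in the spirit of the system described above: it assumes the sensor looks straight down, a known floor distance and a fixed per-pixel footprint (an orthographic approximation, not the paper's calibration).

```python
import numpy as np

def estimate_volume(depth_m, floor_distance_m, pixel_area_m2):
    """Integrate per-pixel load height over the sensor footprint."""
    height_m = np.clip(floor_distance_m - depth_m, 0.0, None)  # height above the floor per pixel
    return float(np.sum(height_m) * pixel_area_m2)

depth = np.full((480, 640), 2.5)   # synthetic depth map in metres (floor at 2.5 m)
depth[100:300, 200:400] = 1.5      # a 1 m high load under the sensor
print(estimate_volume(depth, floor_distance_m=2.5, pixel_area_m2=0.0001))  # 4.0 m^3
```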
Conference Paper
Full-text available
Currently, large areas are continuously monitored by camera networks, where an overall situation assessment within a reasonable time is a crucial requirement. In this paper, we propose our Virtual Top View (VTV) approach, which provides a clear, concise and direct interpretation of on-field activities in real time while preserving spatial relationships and, technically, employs planar homography to aggregate the Virtual Top View out of multiple individual video streams. With an increasing number of cameras or size of the monitored area, the aggregation process slows down. Therefore, we develop acceleration methods (autogenerated warp maps) to achieve real-time aggregation within large camera networks. Finally, we evaluate the performance and demonstrate our approach in an intra-logistics environment. - - - This is a post-peer-review, pre-copyedit version of an article published in CAIP 2013: Computer Analysis of Images and Patterns. The final authenticated version is available online at: https://doi.org/10.1007/978-3-642-40246-3_68 - - -
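The homography-based aggregation can be sketched with OpenCV as follows. The per-camera homographies would normally come from a ground-plane calibration, and the maximum blend is only one simple way to fuse overlapping warped views; both are illustrative assumptions.

```python
import cv2
import numpy as np

def aggregate_top_view(images, homographies, top_view_size=(1000, 1000)):
    """Warp each camera image onto the common ground plane and fuse the results."""
    top_view = np.zeros((top_view_size[1], top_view_size[0], 3), dtype=np.uint8)
    for img, H in zip(images, homographies):
        warped = cv2.warpPerspective(img, H, top_view_size)  # project onto the ground plane
        top_view = np.maximum(top_view, warped)              # naive fusion of overlapping views
    return top_view
```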
Article
Full-text available
In this paper, we describe a multi-camera system for parcel inspection which detects signs of damage and cues of tampering. The proposed system has been developed within the EU project SAFEPOST as part of a multi-sensor scanning modality, to enhance the safety and security of parcels travelling on the European Postal Supply Chain. Our work addresses in particular the safety of valuable goods, whose presence on the postal supply chain is in steady growth. The method we propose is based on extracting 3D shape and appearance information, detecting signs of damage or tampering in real time, and storing the model for future comparative analysis when required by the system. We provide experimental evidence of the effectiveness of the method, both in laboratory and field tests.
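As a loose illustration of the appearance-based part of such an inspection, the sketch below compares a stored reference view of a parcel with the view captured at a later scan point and reports the fraction of changed pixels. The threshold and the purely 2D comparison are illustrative assumptions; the actual system also models 3D shape.

```python
import cv2

def changed_fraction(reference_bgr, current_bgr, pixel_threshold=40):
    """Fraction of pixels whose grey value changed notably between two scans."""
    ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY)
    cur = cv2.cvtColor(cv2.resize(current_bgr, ref.shape[::-1]), cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(ref, cur)
    _, changed = cv2.threshold(diff, pixel_threshold, 255, cv2.THRESH_BINARY)
    return cv2.countNonZero(changed) / diff.size
```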
Article
London City Airport (LCY) is the only London airport actually in London, handling around 70,000 flight movements and 3.65 million passengers in 2014. It is a niche business, in that some 65% of those using LCY are travelling on business and 63% are inbound, having bought their ticket at the other end of the route. Airports don’t have a God-given right to the passengers and airlines they serve. Fifty-nine per cent of airports with four million or fewer passengers in Europe are loss making — in fact, 44% of all airports in Europe are loss making (up 4% in two years). Every airport offers passengers access to air travel — but is this really enough to guarantee survival? Understanding, communicating and delivering on your airport’s passenger proposition is crucial to your success. It is all you have to make you stand out from the crowd. You must protect it at all costs. LCY has developed a passenger proposition based on four pillars — location, network, customer service and, most importantly, speed of transit. It should take no more than 20 minutes to get from front door to departure lounge, and no more than 15 minutes from tarmac to train. We call it the 20:15 promise — and it is a promise that presents an obvious challenge: how can we know if we were delivering?
Conference Paper
The use of sensor data as well as the combination of different data from diverse systems in production and logistics leads to new opportunities for monitoring, controlling and optimization of processes within the scope of Industry 4.0. New developments of camera-based systems support this trend, which is particularly relevant for the control of cyber-physical systems (CPS). This paper discusses a new approach for camera-based dynamic identification and localization, including speed and orientation determination, in combination with data merged from different sources and data analysis for CPS. In order to assess the potential of the approach, various possibilities and methods for camera-based optimization of CPS are discussed. A production-oriented logistics application shows the technical feasibility of the approach. Keywords: Camera, Camera-based sensor, Combining data, Logistics, Cyber-Physical Systems, Optimization, Visualization, Industry 4.0
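Speed and orientation determination from camera-based localization can be reduced to the following sketch, assuming that object positions have already been mapped to floor coordinates in metres; the calibration and identification steps themselves are not shown.

```python
import math

def speed_and_heading(p_prev, p_curr, dt_s):
    """Speed (m/s) and heading (degrees) from two position fixes dt_s seconds apart."""
    dx, dy = p_curr[0] - p_prev[0], p_curr[1] - p_prev[1]
    speed = math.hypot(dx, dy) / dt_s           # metres per second
    heading = math.degrees(math.atan2(dy, dx))  # direction of travel in degrees
    return speed, heading

print(speed_and_heading((0.0, 0.0), (0.5, 0.5), dt_s=0.1))  # about 7.07 m/s at 45 degrees
```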
Chapter
This chapter shows how the flexibility and adaptability to changing environmental conditions known from humans, which are reflected in human cognitive abilities, can be transferred to industrial trucks in intralogistics. As examples of the implementation of Industry 4.0 in intralogistics, technologies are presented that enable industrial trucks to perceive their environment, communicate information, draw conclusions, act autonomously, make decisions, learn and plan. These capabilities are realized by an optical localization system for position determination, camera-based support for storage and retrieval, sensors integrated into the tyres, and novel forms of interaction with industrial trucks based on speech and gestures.
Article
In this contribution, a novel system to support order pickers in warehouses is introduced. In contrast to existing solutions, it utilizes tactile perception in order to reduce the system's impact on the visual and auditory senses. To this end, a smartwatch and a low-cost camera, both worn by the picker, are combined with activity and object recognition methods for surveying the picking process. The activity recognition is used to determine whether an object is picked. Then, barcode detection and a CNN (Convolutional Neural Network) based object recognition approach are employed to recognize whether the correct item is chosen. Besides the conceptual work, implementation details and evaluation results under realistic conditions and on a publicly available dataset are presented.
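The barcode branch of the pick verification described above might look like the following sketch, assuming the pyzbar library for decoding; the CNN-based fallback for items without a readable barcode is omitted, and the file name and expected code are placeholders.

```python
import cv2
from pyzbar import pyzbar

def verify_pick(frame_bgr, expected_code):
    """Return True if any barcode in the frame matches the expected article code."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    for barcode in pyzbar.decode(gray):  # detect and decode all barcodes in the frame
        if barcode.data.decode("utf-8") == expected_code:
            return True                  # the picked item matches the order line
    return False                         # no match: a CNN-based check would take over here

frame = cv2.imread("pick_frame.jpg")     # placeholder wrist-camera frame
if frame is not None:
    print(verify_pick(frame, expected_code="4006381333931"))
```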