Conference Paper

Framework for Automated Robotic Arm Manipulation in Variable Industrial Environments


Abstract

In this paper, we present a generalized, holistic method for automated robotic arm handling of manufactured components in an industrial setting using computer vision. In particular, we address scenarios in which a high volume of manufactured parts moves along a conveyor belt at random locations and orientations, with multiple robotic arms available for manipulation. We also present specific, tested solutions to all stages of the framework, as well as some alternative methods drawn from the literature review. The framework consists of three stages: (1) visual data capture, (2) data interpretation, and (3) command generation and output to robotic arms. In the visual data capture stage, a multi-component computer vision system takes in a live camera feed and exports it to an external processor. In the data interpretation stage, this video feed is interpreted using tools such as 3D point clouds and object detection/tracking models to provide useful information, including object count, location, velocity, and orientation. Lastly, the command generation and output stage takes the information acquired in the data interpretation stage and turns it into instructions for robot control. While a full-scale, cohesive system has yet to be tested, our solutions to each stage show the feasibility of implementing such a system in an industrial setting.
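To make the three-stage data flow concrete, the following is a minimal sketch of how the interpretation and command-generation stages might hand data to one another. All class and function names here are illustrative assumptions, not the authors' implementation: detections from stage 1 are matched across frames to estimate velocity (stage 2), and each part's pose is extrapolated forward to produce a pick command (stage 3).

```python
# Hypothetical sketch of stages 2 and 3 of the framework described above.
# Names (Detection, TrackedPart, interpret, generate_commands) are assumptions.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Detection:
    """Stage-1 output for one part in one frame."""
    part_id: int
    position: Tuple[float, float]   # conveyor-plane coordinates (m)
    orientation_deg: float          # in-plane rotation of the part

@dataclass
class TrackedPart:
    """Stage-2 output: a detection augmented with an estimated velocity."""
    part_id: int
    position: Tuple[float, float]
    velocity: Tuple[float, float]   # m/s, finite-differenced between frames
    orientation_deg: float

def interpret(prev: List[Detection], curr: List[Detection],
              dt: float) -> List[TrackedPart]:
    """Stage 2: match detections across frames by id and estimate velocity."""
    prev_by_id: Dict[int, Detection] = {d.part_id: d for d in prev}
    tracked: List[TrackedPart] = []
    for d in curr:
        p = prev_by_id.get(d.part_id)
        vx = (d.position[0] - p.position[0]) / dt if p else 0.0
        vy = (d.position[1] - p.position[1]) / dt if p else 0.0
        tracked.append(TrackedPart(d.part_id, d.position, (vx, vy),
                                   d.orientation_deg))
    return tracked

def generate_commands(parts: List[TrackedPart],
                      horizon: float) -> List[dict]:
    """Stage 3: extrapolate each part's pose `horizon` seconds ahead
    (constant-velocity assumption) and emit one pick command per part."""
    cmds = []
    for part in parts:
        x = part.position[0] + part.velocity[0] * horizon
        y = part.position[1] + part.velocity[1] * horizon
        cmds.append({"part_id": part.part_id,
                     "pick_at": (x, y),
                     "yaw_deg": part.orientation_deg})
    return cmds
```

For example, a part seen at x = 0.0 m and then at x = 0.05 m one 0.1 s frame later yields a 0.5 m/s belt-direction velocity, so a 0.2 s planning horizon places the pick point at x = 0.15 m. A real system would replace the id-based matching with an object tracker and the constant-velocity extrapolation with the robot's own motion planner.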
