Computer Vision Application in Industrial
Automation
Debashish Roy
Letterkenny Institute of Technology
Port Road, Letterkenny
Co. Donegal, Ireland
L00150522@student.lyit.ie
droy@gmx.us
Abstract — Industry has a huge demand for productivity improvement through the implementation of computer-controlled automation. Modern image processing, image recognition and analysis for inspection, verification and automated processing based on computer vision have advanced industrial process automation. The industrial environment is well suited to vision programming. In the factory, vision-based industrial robots are changing the traditional ways of mechanical assembly, product quality control and rapid manufacturing. The numerous applications and wide range of solutions in the manufacturing industry make image recognition a motivating field of study. Because the scope of this field is both versatile and important, this paper presents the state of the art in computer vision techniques for industrial advancement and their practical applicability.
Keywords—Computer Vision, Industrial Automation, Vision
robot, Vision based industry, Robotic Vision, Automation, Vision.
I. INTRODUCTION
It is necessary to empower vision system agents with the ability to operate autonomously, so that they can detect, identify, pick and place parts independently or in collaboration with a human operator. Present-day industry consists of well-connected, modern manufacturing units equipped with the Internet of Things, which makes the application of vision systems and robotic automation feasible (Tekleyohannes et al. 2017).
Robotic vision, template matching and recognition systems are the main areas of inspection in the manufacturing industry. Computer vision plays an important role in unmanned production in manufacturing units. Visual quality control and automation can be implemented with a camera mounted over the assembly line in a modern factory: the product is recognized by an image recognition system and then classified as "okay" or "not okay" using deep learning techniques for quality control (Ozdemir and Koc 2019).
To achieve this goal, such a system employs a number of machine vision techniques (Wilson 2018).
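As a minimal illustration of the "okay" / "not okay" decision step, and not the exact pipeline of Ozdemir and Koc (2019), the sketch below assumes a pre-trained Keras model file and a product image cropped from the assembly-line camera:

```python
import cv2
import numpy as np
import tensorflow as tf

# Load a previously trained binary quality classifier (hypothetical file name)
model = tf.keras.models.load_model("quality_model.h5")

# Product image cropped from the assembly-line camera frame (hypothetical file name)
img = cv2.imread("product_crop.png")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (128, 128)).astype(np.float32) / 255.0

score = float(model.predict(img[np.newaxis, ...])[0][0])
label = "okay" if score >= 0.5 else "not okay"
print(label, score)  # the result would then be forwarded to the PLC controlling the line
```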
II. BACKGROUND AND RELATED WORK
A. Vision-Based Human-Robot Interaction:
Human-robot interaction is a modern research area; combining robotic interaction with the capabilities of a vision system opens a new dimension of possibilities in the context of manufacturing (Guhl et al. 2017). Focusing on gesture modality and image processing capability, visual input can be processed directly by an industrial robot and used to trigger an action. Most industrial robotic systems concentrate on the tasks and goals of a specific situation.
Research on robot programming has focused on interfaces to tele-laboratories, designed and developed to give the user access to several robot features such as cameras and control frameworks, and which also include representation methods such as augmented and virtual reality. When such applications are brought into an industrial environment, human-robot interaction can be a challenge, but it is nowadays well manageable thanks to the advancement of modern vision systems.
In a manufacturing unit, the vision system provides the robot with the capability to avoid collisions. Many modern approaches automatically determine industrial workspace accessibility using multiple vision-enabled agents (Guhl et al. 2017).
B. Modern 3D Methods for Automation
Industrial robots need to be made capable of autonomous operation. Depending on the specific application, the vision system may be scene-related or object-related. Cameras, lasers and sensors attached to the agent allow it to capture images of the task, while a 3D imaging system gives the additional capability of being aware of the surroundings, for example whether there is an object to pick or a human blocking the path (Wilson 2018).
Object-related tasks such as pick-and-place incorporate methods to localize and map objects within the manufacturing unit. In short, a 3D vision system adds advanced capabilities that allow an industrial robot to automate tasks more accurately using different peripherals and software.
C. Open Platform Communications (OPC) Vision:
The OPC architecture is a secure and reliable communication protocol for manufacturers, spanning from the smallest sensor up to the enterprise IT level and the cloud (Cassel 2018). OPC Vision is designed for image processing and focuses on the factory floor and general industrial platforms. Its objective is to integrate image processing components into industrial automation applications, with the aim of enabling machine vision technologies to interact with the entire factory. At the user level, the image processing system provides a semantic description of image data (Cassel 2018).
D. Decentralized Vision-Based Communication:
In production inspection tasks, image processing systems communicate with programmable logic controllers (PLCs) (Cassel 2018). In this method, the system communicates a pass/fail result to the PLC after analysing the image. These communication methods are standardized in OPC Vision.
An ERP system can also query frame grabber properties, and image processing result streams can be retrieved by clients via events (Cassel 2018).
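As a hedged sketch of this pattern, using the generic OPC UA protocol via the python-opcua library rather than the OPC Vision companion specification itself, and with a hypothetical endpoint and node IDs, an inspection system might publish its pass/fail result like this:

```python
from opcua import Client  # python-opcua (FreeOpcUa)

# Hypothetical OPC UA endpoint of the PLC / OPC server on the factory network
client = Client("opc.tcp://192.168.0.10:4840")
client.connect()
try:
    # Hypothetical node IDs exposed by the server for the inspection result
    result_node = client.get_node("ns=2;s=Inspection.PassFail")
    reason_node = client.get_node("ns=2;s=Inspection.FailReason")

    # Write the outcome of the image analysis so the PLC can divert "not okay" parts
    result_node.set_value(False)
    reason_node.set_value("surface scratch detected")
finally:
    client.disconnect()
```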
E. Vision-Enabled Robot Operating System (ROS):
Industrial automation applications need data and code to assign tasks in the cloud; the Robot Operating System (ROS) provides extensive middleware to maintain communication with cloud-enabled robots (Guhl et al. 2017).
Cloud-based robotic frameworks are tailored towards high-bandwidth robotic applications in industry and allow the outsourcing of vision-enabled tasks.
III. ARCHITECTURE & METHOD OF COMPUTER VISION
This paper focuses on the development of computer vision systems for industrial production units to automate tasks with or without human intervention. An RGB camera mounted on a robotic machine, in combination with sensors, allows the robot to navigate independently and safely based on the perceived environment. The camera captures images of the surface and its obstacles at a fixed frame rate; these are then processed on the computer using the OpenCV programming library to decide the direction of turn and to generate an obstacle-free path for the robot (Kumar et al. 2018).
A. Path planning architecture using RGB Camera:
Images captured by the camera are processed using the OpenCV library, one of the most common and capable computer vision tools. Images arrive as RGB (Red, Green, Blue) channel colour data, which is suitable for the OpenCV library.
The image is split into upper and lower halves, and the lower portion is set as the region of interest (ROI) so that the robot sees only the floor relevant for movement inside the manufacturing unit. The ROI is initially in RGB colour, which makes it hard to separate obstacles from the floor. The colour image is therefore converted to grayscale and then thresholded into a binary image in which the floor is prominent: pixels containing the colour of the floor turn white (value 255 in an 8-bit image) and the rest turn black (value 0).
Fig. 1. Source: (Kumar et al. 2018)
A histogram of the black and white pixels is plotted. In the warehouse, the robot moves towards the side of the ROI with more white (floor) pixels in order to avoid the obstacle.
Fig. 2. Source: (Kumar et al. 2018)
Figure 2 shows how a deadlock in a warehouse movement situation is avoided by using the RGB camera.
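A minimal OpenCV sketch of this floor-segmentation and turn-decision pipeline is given below; the frame source, the fixed threshold value and the left/right comparison rule are illustrative assumptions, not the exact code of Kumar et al. (2018):

```python
import cv2
import numpy as np

frame = cv2.imread("floor_view.png")            # hypothetical camera frame
h, w = frame.shape[:2]
roi = frame[h // 2 :, :]                        # lower half of the image as the ROI (floor region)

gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
# Threshold so that floor-coloured pixels become white (255) and everything else black (0);
# the threshold value 120 is illustrative and would be tuned to the actual floor colour
_, binary = cv2.threshold(gray, 120, 255, cv2.THRESH_BINARY)

# Count white (free-floor) pixels on each half of the ROI, as in the plotted histogram
left_free = int(np.count_nonzero(binary[:, : w // 2] == 255))
right_free = int(np.count_nonzero(binary[:, w // 2 :] == 255))

# Steer towards the side with more visible floor to avoid the obstacle
turn = "left" if left_free > right_free else "right"
print(turn, left_free, right_free)
```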
B. Vision based Robotic system:
Autonomous vision-powered pick-and-place robotics can be described as a flowchart of operations. The picking system in industry is a programmable robot arm combined with various computer vision algorithms for object recognition and localization (Huang and Mok 2018). The common task in industrial use is in-house pick-and-place with the help of a vision system agent.
C. Architecture of the Pick and place system:
The vision system implements machine learning and vision algorithms. The robot control framework is based on the Robot Operating System (ROS), with two types of camera system and a depth information system used to generate 3D points.
Fig. 3. Source: (Huang and Mok 2018)
ROS is an open-source middleware framework for industrial robotic automation.
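The sketch below is a minimal illustration of how a ROS (ROS 1) node in such a system could receive camera frames and hand them to the vision algorithms; the topic name is an assumption and the cited architecture is not reproduced exactly:

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def image_callback(msg):
    # Convert the ROS image message to an OpenCV BGR frame
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    # ... run object recognition / localization on `frame` and publish a grasp target ...

if __name__ == "__main__":
    rospy.init_node("vision_pick_place")
    rospy.Subscriber("/camera/color/image_raw", Image, image_callback)  # assumed topic name
    rospy.spin()
```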
IV. VISION-BASED ROBOTIC MANUFACTURING
Industrial engineering automation supported by robots results in an advanced, intelligent smart factory. Machine vision systems that evaluate the welding process, inspect for fault detection and acquire visual sensor data are common parts of an automated workflow in which very little human intervention is required. The aerospace manufacturing industry is a prominent example of Industry 4.0 (French et al. 2017).
The pick-and-place operation is the most common industrial automation task that uses computer vision. Shape detection and dimension calculation are possible either through embedded code or through pattern recognition.
Fig. 3. Source: (French et al. 2017)
The machine vision system enables the robotic hand to detect, identify and monitor objects for the task.
A. Vision system for welding:
A machine vision system is capable of performing a pre-weld evaluation, after which the welding parameters are calculated and the welding system is set up (Sergeyev et al. 2017).
The vision system inspects the welding parameters, and a guideline is then created for the robotic arm to perform the welding operations in the specified area (French et al. 2017).
B. Object Recognition for Robotic Automation:
Object detection systems for robotic automation are based on feature matching. SIFT (Scale Invariant Feature Transform) is a method for local descriptor extraction and template matching. ROS is programmed to recognize the object using an attached camera (Xu et al. 2016).
V. ALGORITHMS AND METHODS FOR OBJECT DETECTION
A. SIFT (Scale Invariant Feature Transform):
In machine vision systems, SIFT has important properties such as invariance to translation, scaling and rotation (Xu et al. 2016). Template detection is one of the most common tasks in manufacturing. In this method, a selected template of an object is detected in the given image.
Fig. 4. Source: (Xu et al. 2016)
SIFT is also used to find the closest matching image template and locate the object.
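A hedged OpenCV sketch of SIFT-based template detection is shown below; the file names and thresholds are assumptions, not the setup of Xu et al. (2016):

```python
import cv2
import numpy as np

template = cv2.imread("part_template.png", cv2.IMREAD_GRAYSCALE)  # hypothetical template image
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)             # hypothetical camera image

sift = cv2.SIFT_create()
kp_t, des_t = sift.detectAndCompute(template, None)
kp_s, des_s = sift.detectAndCompute(scene, None)

# Match descriptors and keep matches that pass Lowe's ratio test
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des_t, des_s, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

if len(good) >= 4:
    src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_s[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # Homography locating the template in the scene, robust to outliers via RANSAC
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
```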
B. CamShift and MeanShift Algorithms:
CamShift (Continuously Adaptive MeanShift) is an extension of MeanShift, a gradient-based mode estimation method, that adapts the size of the search window. CamShift converts a colour histogram into a colour probability distribution map of the image. It is very useful and efficient for rough position calibration.
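A minimal OpenCV CamShift sketch follows; the camera index and the initial tracking window are assumptions:

```python
import cv2

cap = cv2.VideoCapture(0)                      # assumed camera index
ok, frame = cap.read()
x, y, w, h = 300, 200, 80, 80                  # assumed initial window around the object
track_window = (x, y, w, h)

# Hue histogram of the initial region becomes the colour model
hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Back-projection gives the colour probability distribution map used by CamShift
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
```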
C. CNN for Image Classification:
The CNN approach enables pattern recognition in industrial environments, where a CNN model trained on image data can autonomously separate and identify specific patterns and regions in an image (Ullah and Mehmood 2019).
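As one possible illustration (not the model of Ullah and Mehmood 2019), a small binary CNN classifier for such pattern recognition could be defined and trained with Keras roughly as follows; the input size and layer sizes are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),          # assumed input image size
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),      # e.g. okay / not-okay
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10, validation_split=0.2)  # hypothetical data
```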
D. SVM for Image Classification:
An SVM is a classifier that predicts a class by mapping points into a feature space and finding the separating hyperplane with the largest margin. In its basic form the method separates two different classes (Soans et al. 2018).
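A minimal scikit-learn sketch of a two-class SVM on pre-computed image features; the feature files and label encoding are assumptions:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical feature vectors extracted from part images (e.g. shape/texture descriptors)
X = np.load("part_features.npy")   # shape (n_samples, n_features), assumed file
y = np.load("part_labels.npy")     # 0 = defective, 1 = good (assumed encoding)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf = SVC(kernel="linear")         # linear kernel: maximum-margin separating hyperplane
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```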
VI. VISION DATA ACQUISITION TECHNIQUES
An industrial camera collects coordinates, distance values and directions in the robot coordinate system. When the robot moves within the visual field, the camera pixels stay the same, but the camera collects the coordinates of points along the direction of motion, and converting those coordinates from 2D to 3D reveals the change in location (Luo and Wang 2018).
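The sketch below illustrates this 2D-to-3D conversion for a single pixel using a pinhole camera model; the intrinsic parameters, detected pixel and hand-eye transform are assumed values, not those of Luo and Wang (2018):

```python
import numpy as np

# Hypothetical pinhole intrinsics (fx, fy focal lengths; cx, cy principal point, in pixels)
fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0

def pixel_to_camera_point(u, v, depth_m):
    """Back-project pixel (u, v) with measured depth (metres) to a 3D camera-frame point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Example: an object centre detected at pixel (400, 260) with 0.85 m measured depth
p_cam = pixel_to_camera_point(400, 260, 0.85)
# A hand-eye calibration matrix T_robot_cam (4x4, assumed known) would then map this
# point into the robot coordinate system: p_robot = T_robot_cam @ np.append(p_cam, 1.0)
print(p_cam)
```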
A. Stereo Vision System:
A stereo camera, whose lenses are aligned so that the optical axes and image planes are parallel, allows a computer system to measure distances to objects. In this method one camera is used as the reference and the other works as the side camera. A stereo matching algorithm matches patterns in one image and compares the difference in their locations with the other camera's image; this difference, the disparity, corresponds to depth (Dandıl and Çevik 2019). The method acquires 3D data, the x, y, z coordinates shown in Figure 5, from multiple 2D views.
Fig. 5. Source: (Dandıl and Çevik 2019)
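A hedged OpenCV sketch of how such a disparity map and the resulting depth could be computed from a rectified stereo pair; the file names, focal length and baseline are assumptions:

```python
import cv2
import numpy as np

# Rectified left/right grayscale frames from the stereo pair (hypothetical file names)
left = cv2.imread("left_rect.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rect.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo: numDisparities must be a multiple of 16, blockSize odd
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # StereoBM returns fixed-point x16

# Depth from disparity: Z = f * B / d, with focal length f (pixels) and baseline B (metres) assumed known
f_px, baseline_m = 615.0, 0.06
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = f_px * baseline_m / disparity[valid]
```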
B. Edge-Based Disparity Map:
The edge-based disparity map algorithm enhances the reliability of the result, which is achieved by finding the same point within an object block. Edges are important features because they carry information about the object and are extracted from the rectified image. Object boundaries and surfaces are identified with this method.
To enhance the shape of the object, a morphological filtering operation that spreads the boundary can be applied to the edge-detected image. Canny edge detection is another edge detection method; it is mainly used to extract high-quality edges from an image and is also helpful for detecting true weak edges (Du and Okae 2017).
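A small OpenCV sketch of the edge-extraction step described above, assuming a rectified input image and illustrative Canny thresholds:

```python
import cv2

gray = cv2.imread("rectified_left.png", cv2.IMREAD_GRAYSCALE)  # hypothetical rectified image

# Canny edge detection; the low/high thresholds are illustrative and would be tuned per scene
edges = cv2.Canny(gray, 50, 150)

# Morphological dilation spreads (thickens) the detected boundaries before disparity matching
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
edges_dilated = cv2.dilate(edges, kernel, iterations=1)

# Optional closing to bridge small gaps along object contours
edges_closed = cv2.morphologyEx(edges_dilated, cv2.MORPH_CLOSE, kernel)
```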
VII. CONCLUSION
This paper describes real-world applications of vision-based autonomous agent systems for industrial work. Object detection, pose estimation and 3D model reconstruction are the major components of such systems, which also include trajectory optimization. The automation agent's functionality is built from various libraries that include vision algorithms. Real performance testing of these systems is what makes the automation accountable.
Human-machine interfaces in manufacturing units based on vision input enable industry to be semi- or fully automated.
REFERENCES
Cassel, M. (2018) ‘“OPC Vision” release candidate presented:
Image processing to become integral component of
industrial automation’, Vision Systems Design,
23(11), 12–14.
Dandıl, E., Çevik, K.K. (2019) ‘Computer Vision Based
Distance Measurement System using Stereo Camera
View’, in 2019 3rd International Symposium on
Multidisciplinary Studies and Innovative
Technologies (ISMSIT), Presented at the 2019 3rd
International Symposium on Multidisciplinary
Studies and Innovative Technologies (ISMSIT), 1–4.
Du, J., Okae, J. (2017) ‘Optimization of stereo vision depth
estimation using edge-based disparity map’, in 2017
10th International Conference on Electrical and
Electronics Engineering (ELECO), Presented at the
2017 10th International Conference on Electrical and
Electronics Engineering (ELECO), 1171–1175.
French, R., Benakis, M., Marin-Reyes, H. (2017) ‘Intelligent
sensing for robotic re-manufacturing in aerospace —
An industry 4.0 design based prototype’, in 2017
IEEE International Symposium on Robotics and
Intelligent Sensors (IRIS), Presented at the 2017
IEEE International Symposium on Robotics and
Intelligent Sensors (IRIS), 272–277.
Guhl, J., Tung, S., Kruger, J. (2017) ‘Concept and architecture
for programming industrial robots using augmented
reality with mobile devices like microsoft HoloLens’,
in 2017 22nd IEEE International Conference on
Emerging Technologies and Factory Automation
(ETFA), Presented at the 2017 22nd IEEE
International Conference on Emerging Technologies
and Factory Automation (ETFA), 1–4.
Huang, P.-C., Mok, A.K. (2018) ‘A Case Study of Cyber-
Physical System Design: Autonomous Pick-and-
Place Robot’, in 2018 IEEE 24th International
Conference on Embedded and Real-Time Computing
Systems and Applications (RTCSA), Presented at the
2018 IEEE 24th International Conference on
Embedded and Real-Time Computing Systems and
Applications (RTCSA), 22–31.
Kumar, P.B., Parhi, D.R., Sethy, M., Chhotray, A., Kant
Pandey, K., Sahu, C. (2018) ‘Humanoid Navigation:
An Intelligent Computer Vision Based Approach’, in
2018 International Electrical Engineering Congress
(IEECON), Presented at the 2018 International
Electrical Engineering Congress (iEECON), 1–4.
Luo, R.C., Wang, H. (2018) ‘Automated Tool Coordinate
Calibration System of an Industrial Robot’, in 2018
IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS), Presented at the 2018
IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS), 5592–5597.
Ozdemir, R., Koc, M. (2019) ‘A Quality Control Application
on a Smart Factory Prototype Using Deep Learning
Methods’, in 2019 IEEE 14th International
Conference on Computer Sciences and Information
Technologies (CSIT), Presented at the 2019 IEEE
14th International Conference on Computer Sciences
and Information Technologies (CSIT), 46–49.
Sergeyev, A., Alaraje, N., Parmar, S., Kuhl, S., Druschke, V.,
Hooker, J. (2017) ‘Promoting industrial robotics
education by curriculum, robotic simulation software,
and advanced robotic workcell development and
implementation’, in 2017 Annual IEEE International
Systems Conference (SysCon), Presented at the 2017
Annual IEEE International Systems Conference
(SysCon), 1–8.
Soans, R.V., Pradyumna, G.R., Fukumizu, Y. (2018) ‘Object
Sorting using Image Processing’, in 2018 3rd IEEE
International Conference on Recent Trends in
Electronics, Information Communication Technology
(RTEICT), Presented at the 2018 3rd IEEE
International Conference on Recent Trends in
Electronics, Information Communication Technology
(RTEICT), 814–818.
Tekleyohannes, M., Sadri, M., Weis, C., Wehn, N., Klein, M.,
Siegrist, M. (2017) ‘An advanced embedded
architecture for connected component analysis in
industrial applications’, in Design, Automation Test
in Europe Conference Exhibition (DATE), 2017,
Presented at the Design, Automation Test in Europe
Conference Exhibition (DATE), 2017, 734–735.
Ullah, H., Mehmood, I. (2019) ‘Real-Time Video Dehazing
for Industrial Image Processing’, in 2019 13th
International Conference on Software, Knowledge,
Information Management and Applications (SKIMA),
Presented at the 2019 13th International Conference
on Software, Knowledge, Information Management
and Applications (SKIMA), 1–6.
Wilson, A. (2018) ‘Multiple 3D methods ease industrial
automation applications: Many 3D vision techniques
are used for robot guidance and industrial inspection
applications’, Vision Systems Design, 23(11), 15–18.
Xu, D., Huang, Q., Liu, H. (2016) ‘Object detection on robot
operation system’, in 2016 IEEE 11th Conference on
Industrial Electronics and Applications (ICIEA),
Presented at the 2016 IEEE 11th Conference on
Industrial Electronics and Applications (ICIEA),
1155–1159.