Physics Procedia 25 (2012) 1955–1965
1875-3892 © 2012 Published by Elsevier B.V. Selection and/or peer-review under responsibility of Garry Lee
doi: 10.1016/j.phpro.2012.03.335
2012 International Conference on Solid State Devices and Materials Science
Design and Development of a High Speed Sorting System
Based on Machine Vision Guiding
Wenchang Zhang, Jiangping Mei, Yabin Ding*
School of Mechanical Engineering
Tianjin University
Tianjin, China
Abstract
In this paper, a vision-based control strategy to perform high-speed pick-and-place tasks on an automated production line is proposed, and the relevant control software is developed. A Delta robot controls a suction gripper to grasp disordered objects from one moving conveyer and place them on the other in order. A CCD camera captures one image every time the conveyer moves a distance ds, and the objects' positions and shapes are obtained through image processing. A target tracking method based on "servo motor + synchronous conveyer" is used to fulfill the high-speed sorting operation in real time. Experiments conducted on the Delta robot sorting system demonstrate the efficiency and validity of the proposed vision-control strategy.
Keywords: automatic production line, machine vision, target tracking, Delta robot
1. Introduction
As a comprehensive technology, machine vision has been used widely in various fields [1][2], including industrial fields, where it makes a significant contribution to ensuring competitiveness in modern industrial manufacturing [3]. On automatic sorting lines, machine vision is used to detect and track moving targets and to guide the sorting robot in completing the sorting task. Much work has been done on target recognition and dynamic object tracking. Wiehman designed a computer device that recognized objects in real time with a mobile camera [4]. Zhang realized a grasping task on a conveyer moving at about 300 mm/s with dynamic visual feedback [5]. Allen designed a stereovision system that tracked a target moving at about 250 mm/s [6]. Wilson realized the visual servo control of robots using Kalman filter
estimates of relative pose [7]. Fu-Cheng You [8] proposed a system combining a specialized machine vision software platform with the development tool VC++ to recognize and sort mechanical parts on the production line, which greatly improved the development efficiency and running speed of the system. Dong Xia [9] presented a moving-target detection method based primarily on inter-frame differencing, using only visual sensing as input.

* Supported by NC Major Project (2011ZX04013-011)
Based on the previous research, we propose a vision-based control strategy to perform high-speed pick-and-place tasks, in which a Delta robot controls a suction gripper to grasp disordered objects from one moving conveyer and place them on the other in order. The whole control system is composed of a vision module and a motion control module. The former obtains the position and shape of the objects on the moving conveyer; the latter controls the robot to complete the sorting tasks intelligently. Whether the robot can grasp the targets correctly depends on the image processing and target tracking, which are the core of the machine vision system. Because of the computational complexity, it is hard to fulfill the robot's pick-and-place operation in real time, especially when the system runs at high speed. "Servo motor + synchronous conveyer" is used to assist the vision system in solving this problem. The CCD camera captures one image every time the conveyer moves a distance ds, and the objects' positions and shapes are obtained through image processing. The real-time positions of the objects are then derived from the conveyer velocity as they move with it. The conveyer velocity is planned and each object's picking position is calculated with reference to the velocity of the Delta robot. LabVIEW is selected as the development environment, and the vision-guided control software is developed in it.
2. System structure
As shown in Figure 1, the Delta robot is a high-speed, high-precision 3-DOF parallel mechanism for pick-and-place operations. It achieves a repeat positioning accuracy of ±0.01 mm in space and can perform pick-and-place operations more than 120 times/min. Two conveyers are used for material transmission. A CCD camera is fixed above conveyer I, on which the disordered objects arrive. The Delta robot picks the disordered objects from it and places them on conveyer II, guided by machine vision.
Figure 1. System structure
The vision module is composed of a computer, an image acquisition card and a CCD camera. The motion control module is composed of the computer, a motion control card, servo drivers and motors. The communication between the vision module and the motion control module is realized through the computer.
Figure 2. The whole control system
The vision system is used to acquire object information for the motion control of the system. A UNIQ UP680CL Channel Link digital camera is chosen as the vision input device. This camera is a digital, monochrome unit with a 6.8 mm × 4.8 mm CCD and a resolution of 659 × 494 pixels. The vertical distance between the camera and the conveyer is 600 mm, giving a visual field of about 280 × 180 mm. An NI PCI-1428 Camera Link image acquisition card is chosen, and the IMAQ Vision development module is used. The motion control module is based on an NI PCI-7356 motion control card, which provides a 64 kB buffer. In the contouring control mode, the motor rotates one revolution per 10,000 pulses. The computer writes the trajectory points into the buffer, and the motion control card reads the data from the buffer and drives the motors in real time.
3. Machine vision system
3.1 Image acquisition and processing
In a real image acquisition system, image quality degrades during image formation, transmission and reception, making it difficult to extract the effective information. Image processing is used to make the image easier for the computer to analyze and the targets easier to recognize. In this system, computer vision is used to recognize different rings, and image processing consists of image pretreatment and image segmentation. After processing, the image becomes a binary image in which regions of 1 represent the target objects and 0 represents the background.
In a binary image, the object area can be defined as the number of pixels enclosed by the object boundary, which is related to the size of the object. Define the (p, q)-order moment of the target image as [7]

m_{pq} = \sum_{i=1}^{M} \sum_{j=1}^{N} i^{p} j^{q} B(i,j)    (1)

where M and N are the length and width of the target region, and B(i,j) is the pixel value (0 or 1). The area is calculated as

A = \sum_{i=0}^{n-1} \sum_{j=0}^{m-1} B[i,j]    (2)
and the centroid of the target object is

u = \frac{\sum_{i=1}^{M} \sum_{j=1}^{N} i\,B(i,j)}{\sum_{i=1}^{M} \sum_{j=1}^{N} B(i,j)}, \qquad v = \frac{\sum_{i=1}^{M} \sum_{j=1}^{N} j\,B(i,j)}{\sum_{i=1}^{M} \sum_{j=1}^{N} B(i,j)}    (3)
Figure 3 shows the image acquisition and image processing program developed in LabVIEW. With the LabVIEW calibration tools, the coordinates of the object in the world coordinate system are obtained.
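Although the paper's image processing is implemented in LabVIEW, the area and centroid computations of Eqs. (1)-(3) can be sketched compactly; the following is a minimal NumPy illustration (not the authors' code), where B is a binary image with 1 on the target and 0 on the background.

```python
import numpy as np

def area_and_centroid(B):
    """Area (Eq. 2) and centroid (Eq. 3) of a binary image B."""
    B = np.asarray(B, dtype=float)
    A = B.sum()                   # Eq. (2): number of object pixels
    i, j = np.indices(B.shape)    # row and column index grids
    u = (i * B).sum() / A         # Eq. (3): centroid row coordinate
    v = (j * B).sum() / A         # Eq. (3): centroid column coordinate
    return A, (u, v)

# A 3x3 square object placed at rows 2-4, columns 3-5
B = np.zeros((8, 8))
B[2:5, 3:6] = 1
A, (u, v) = area_and_centroid(B)   # A = 9, centroid at (3.0, 4.0)
```

The centroid is the pick point handed to the motion control module once it is converted to world coordinates by the calibration.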
Figure 3. Machine vision module
3.2 Dynamic target tracking
In order to fulfill the high-speed pick-and-place operation of the robot in real time, "servo motor + synchronous conveyer" is used to assist the vision system. Through image processing, we obtain the objects' information at the moment an image is acquired. The conveyer is driven by a servo motor, whose position pulse count can be read from the code disc in real time, so the servo motor position pulse can be used to track the dynamic objects as they move with the conveyer. With this parameter, an object can be recorded as A_n = (x_n, y_n, c_n)^T, and with the movement of the conveyer the position of the object is

\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x_n + (c - c_n)\,d \\ y_n \end{pmatrix}    (4)

where c is the real-time code-disc reading of the servo motor and d is the distance the conveyer moves per pulse.
In order to decrease the computer load without leaving out any target, the CCD camera captures one image every time the conveyer moves a distance ds:

d_s = M - \zeta    (5)

where M is the length of the camera field of view in the direction of conveyer motion and \zeta is the maximum length of an object; subtracting \zeta ensures that an object straddling the edge of one image appears completely in the next, so no object is missed. Finally, the objects in each image are compared with those in the
former one, and the repeated objects are deleted, as shown in Figure 4. This object tracking strategy greatly reduces the amount of calculation and fulfills the high-speed sorting operations in real time.
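The tracking scheme above can be sketched as follows; this is a hypothetical illustration (the pulse resolution MM_PER_PULSE and all values are assumptions, not from the paper) of how each detected object stores the encoder count at capture time and is then located from elapsed pulses via Eq. (4).

```python
from dataclasses import dataclass

MM_PER_PULSE = 0.01   # assumed conveyer travel per encoder pulse, mm


@dataclass
class TrackedObject:
    x_n: float   # x at image capture (mm)
    y_n: float   # y at image capture (mm)
    c_n: int     # servo motor pulse count at image capture

    def position(self, c):
        # Eq. (4): x advances with the conveyer, y is unchanged
        return self.x_n + (c - self.c_n) * MM_PER_PULSE, self.y_n


obj = TrackedObject(x_n=100.0, y_n=50.0, c_n=20_000)
x, y = obj.position(c=30_000)   # 10 000 pulses later: +100 mm along x
```

No per-frame vision computation is needed between captures; the encoder count alone updates every tracked object, which is what makes the real-time operation feasible.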
(a) Three continuous images
(b) Terminal result
Figure 4. Dynamic targets recognition strategy
4. Sorting Control Strategy
4.1 Control strategy of Delta robot
In industrial applications, a high-speed sorting robot usually runs a "door"-shaped path, shown as the dashed line in Figure 5, consisting of three line segments [10]: P1P2 (S1), P2P3 (S2) and P3P4 (S3), with the acceleration given by Eq. (6). In order to improve the efficiency of the robot, the second segment begins at P5 and the third begins at P7; the robot reaches P6 when the first segment finishes and P8 when the second finishes (solid line). P5 is the middle point of P1P2, and P8 is the middle point of P3P4.
a(t) = \begin{cases}
a_{\max}\sin\!\left(\frac{4\pi}{T}t\right), & 0 \le t \le \frac{T}{8} \\
a_{\max}, & \frac{T}{8} \le t \le \frac{3T}{8} \\
a_{\max}\cos\!\left[\frac{4\pi}{T}\left(t-\frac{3T}{8}\right)\right], & \frac{3T}{8} \le t \le \frac{5T}{8} \\
-a_{\max}, & \frac{5T}{8} \le t \le \frac{7T}{8} \\
-a_{\max}\cos\!\left[\frac{4\pi}{T}\left(t-\frac{7T}{8}\right)\right], & \frac{7T}{8} \le t \le T
\end{cases}    (6)
Figure 5. Robot running path
where a_{\max} is the maximum acceleration of the robot and T is the total time the robot takes to finish running one line segment. The time the robot takes to run the whole path from P5 to P8 is

t = \frac{T_1}{2} + T_2 + \frac{T_3}{2}    (7)

where

T_i = \sqrt{\frac{S_i}{\left(\frac{1}{8} + \frac{1}{8\pi}\right) a_{\max}}} \qquad (i = 1, 2, 3)
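The sine-modified acceleration profile of Eq. (6) can be checked numerically; the sketch below (T and a_max are arbitrary assumed values) evaluates the five pieces and integrates them to confirm that a segment starts and ends at rest, since the accelerating and decelerating halves cancel.

```python
import math

def accel(t, T, a_max):
    """Acceleration profile of Eq. (6) for one 'door'-path segment."""
    if t <= T / 8:
        return a_max * math.sin(4 * math.pi * t / T)
    if t <= 3 * T / 8:
        return a_max
    if t <= 5 * T / 8:
        return a_max * math.cos(4 * math.pi / T * (t - 3 * T / 8))
    if t <= 7 * T / 8:
        return -a_max
    return -a_max * math.cos(4 * math.pi / T * (t - 7 * T / 8))

# Midpoint-rule integration of a(t) over [0, T]: the net velocity
# change is zero, i.e. the segment is a rest-to-rest motion.
T, a_max, n = 1.0, 10.0, 100_000
dt = T / n
v = 0.0
for k in range(n):
    v += accel((k + 0.5) * dt, T, a_max) * dt
# v is ~0 after the whole segment
```

The same profile is continuous at every breakpoint (e.g. sin(pi/2) = cos(0) = 1 at t = T/8 and 3T/8), which is what limits jerk at high cycle rates.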
4.2 Velocity of the conveyer
On an automatic sorting line, objects arrive on the conveyer at varying densities. The conveyer should move faster when the density is low and slower when it is high, so as to fit the work cycle of the sorting robot and ensure high sorting efficiency without leaving out any object. The next-picking object's position is used to plan the velocity of the conveyer, as shown in Figure 5.
Figure 5. Velocity control of the conveyer
The maximum velocity of the conveyer is v_{\max}, and (x_{\min}, x_{\max}) is the operation space of the system (depending on the robot working space). The position of the next-picking object is (x, y) when the robot begins to move, and v_t is the velocity of the conveyer at that moment. Then, during the movement of the robot, the conveyer moves at a speed of

v = \begin{cases} v_{\max}, & x \le 0 \\ v_{\max}(x_{\max} - x)/x_{\max}, & 0 < x \le x_{\max} \\ 0, & x > x_{\max} \end{cases}    (8)
In order to decrease the impact of conveyer-velocity changes on the system, a sine rule is used in the velocity-change process. The maximum acceleration of the conveyer is a'_{\max}, and the velocity changes from v_1 to v_2 (v_2 > v_1) during a time T'. During this time, the acceleration of the conveyer is

a' = \frac{a'_{\max}}{2}\left[1 - \sin\!\left(\frac{2\pi}{T'}t + \frac{\pi}{2}\right)\right]    (9)

Since \int_0^{T'} a'\,dt = v_2 - v_1, we obtain

T' = \frac{2(v_2 - v_1)}{a'_{\max}}    (10)
4.3 Pick position calculation
As the conveyer moves in the direction of the x axis, the real-time position of the object is

\begin{cases} x = x_n + \int_0^t v\,dt \\ y = y_n \end{cases}    (11)

where (x_n, y_n) is the position coordinate of the target at the time the robot begins to run to grasp it, and v is the velocity of the conveyer. Combining Eq. (7) and Eq. (11) leads to
x = x_n + \frac{(v_2 + v_1)(v_2 - v_1)}{a'_{\max}} + v_2\left[\frac{1}{2}\sqrt{\frac{S_1}{\left(\frac{1}{8}+\frac{1}{8\pi}\right)a_{\max}}} + \sqrt{\frac{\sqrt{(x - x_0)^2 + (y - y_0)^2}}{\left(\frac{1}{8}+\frac{1}{8\pi}\right)a_{\max}}} + \frac{1}{2}\sqrt{\frac{S_3}{\left(\frac{1}{8}+\frac{1}{8\pi}\right)a_{\max}}} - \frac{2(v_2 - v_1)}{a'_{\max}}\right]    (12)
Eq. (12) is too complex to solve directly, so Newton's dichotomy (bisection) is used here. Figure 6 shows the curves of Eq. (7) and Eq. (11).
1962 Wenchang Zhang et al. / Physics Procedia 25 ( 2012 ) 1955 – 1965
Figure 6. Curve of Eq. (7) and Eq. (11)
Construct the function

F(x) = x_n + \frac{(v_2 + v_1)(v_2 - v_1)}{a'_{\max}} + v_2\left[\frac{1}{2}\sqrt{\frac{S_1}{\left(\frac{1}{8}+\frac{1}{8\pi}\right)a_{\max}}} + \sqrt{\frac{\sqrt{(x - x_0)^2 + (y - y_0)^2}}{\left(\frac{1}{8}+\frac{1}{8\pi}\right)a_{\max}}} + \frac{1}{2}\sqrt{\frac{S_3}{\left(\frac{1}{8}+\frac{1}{8\pi}\right)a_{\max}}} - \frac{2(v_2 - v_1)}{a'_{\max}}\right] - x    (13)
Eq. (12) then becomes F(x) = 0. Four theoretical solutions exist, but within the operation space there are three situations, based on Figure 6:
a) only one solution, with x ≤ x_0;
b) two solutions, with x_1 ≤ x_0 and x_2 > x_0;
c) only one solution, with x > x_0.
When situation b) occurs, the first solution is used in order to improve the efficiency of the system. Define a small tolerance ξ; when |F(x)| ≤ ξ, x is regarded as the required solution. The dichotomy is used as follows:
Step 1: If x_n ≤ x_0, let x_1 = x_n and x_2 = x_0; then F(x_1) > 0, and if F(x_2) > 0, let x_2 = x_max. If x_n > x_0, let x_1 = x_n and x_2 = x_max.
Step 2: Let x_3 = (x_1 + x_2)/2; if |F(x_3)| ≤ ξ, x_3 is the required solution. Otherwise, go to Step 3.
Step 3: If F(x_3) > 0, let x_1 = x_3; if F(x_3) < 0, let x_2 = x_3; return to Step 2.
4.4 System process and experiment
In order to ensure that the system works fluently, a target data
Figure 7. System control flow
base is used to exchange information between the computer vision module and the motion control module. The targets' information is written into the database by the computer vision module; the motion control module reads the targets' information from the database and controls the motion of the whole system, realizing the sorting tasks at high speed. Figure 7 shows the whole process of the system. In application, with the maximum velocity of the conveyer set to 400 mm/s and the maximum acceleration of the robot configured accordingly, the system realized the sorting of two different pieces at more than 120 times/min, as shown in Figure 8.
5. Conclusions
A vision-based control strategy to perform high-speed pick-and-place tasks on an automated production line has been proposed. The vision system obtains the position and shape of the objects, and "servo motor + synchronous conveyer" assists the target tracking so that the robot's pick-and-place operation is fulfilled in real time; the motion control system coordinates and plans the robot's movement together with the conveyer to complete the sorting task. Experiments conducted on the Delta robot sorting system demonstrate the efficiency and validity of the proposed control strategy.
6. Acknowledgment
The authors thank all the people who were involved in this work: Tian Huang, Liangan Zhang, Panfeng Wang, Limin Zhang, Yi Li, Yang Tan, Ben Gao and Dongxing Yu.
Figure 8. Sorting experiment
References
[1] Kamarul Hawari Ghazali and Saifudin Razali, Machine Vision System for Automatic Weeding Strategy in Oil Palm
Plantation using Image Filtering Technique, Information and Communication Technologies: From Theory to Applications, 2008.
ICTTA 2008. 3rd International Conference on, 2008, pp. 1 – 5.
[2] Lu Ren, Lidai Wang, J. K. Mills and Dong Sun, 3-D Automatic Microassembly by Vision-Based Control, Intelligent Robots and Systems, 2007. IROS 2007. IEEE/RSJ International Conference on, 2007, pp. 297-302.
[3] A. M. Wallace, Industrial applications of computer vision since 1982, IEE Proceedings, vol. 135, Pt. E, no. 3, 1988, pp. 117-136.
[4] W. Wiehman, Use of optical feedback in the computer control of an arm, AI Memo 55, Stanford AI Project, August 1967.
[5] D. B. Zhang, L. V. Gool and A. Oosterlinck, Stochastic predictive control of robot tracking systems with dynamic visual feedback, Proc. IEEE Int. Conf. Robotics and Automation, 1990, pp. 610-615.
[6] P. K. Allen, B. Yoshimi and A. Timcenko, Real-time visual servoing, Proc. IEEE Int. Conf. Robotics and Automation, 1991, pp. 851-856.
[7] W. Wilson, Visual servo control of robots using Kalman filter estimates of relative pose, Proc. IFAC 12th World Congress, Sydney, 1993, pp. 399-404.
[8] Fu-Cheng You and Yong-Bin Zhang, A Mechanical Part Sorting System Based on Computer Vision, Computer Science
and Software Engineering, 2008 International Conference on, 2008 , pp. 860 – 863.
[9] Dong Xia and Wang Kedian, Analysis of Dynamic Images in Machine Vision and Its Application Study in Motion Control, Mechatronics and Machine Vision in Practice, 2007. M2VIP 2007. 14th International Conference, 2007, pp. 112-117.
[10] Jiangping Mei, Panfeng Wang and Tian Huang, Path planning and motion control of a 2-dof translational high-speed parallel manipulator, Manufacturing Automation, vol. 26(9), pp. 29-33.
... Such vision-based closed-loop control enhances accuracy, safety, flexibility, reliability, functionality and efficiency in robotic automation while also reducing the need for com-plex fixtures. Visual servoing has been adopted in a wide range of robotic applications such as pick and place [4], sorting [5], inspection [6], monitoring [7], parts assembly and disassembly [8], harvesting [9], assistive surgery [10] etc. Moreover, visual servoing have been deployed in robotic manipulators [11], unmanned ground vehicles (UGV) [12], unmanned ariel vehicles (UAV) [13], [14], unmanned under-water vehicles (UUV) [15], space robots [16], human-robot interaction (HRI) [17] and multi-robot systems (MRS) [18]. ...
... In modern industries ranging from e-commerce warehouses [24] and production/assembly lines [5] to domestic assistive robots for daily living [25], there has been an increased use of vision based robot control to perform manipulation tasks on static and moving objects. In the literature, several visual servoing pipelines have been devised to address these problems in both indoors and outdoors settings using a variety of control strategies and manipulator designs [24]- [30]. ...
... Input: Stream of events e i = x i , y i , t i , P ol i , Contiguity threshold C th Output: Command velocity V c . 1 Initialize three layers of surface of active events SAE, SACE and SAVE. 2 Initialize a desired feature event p cc (eg: center of the sensor plane) in SAVE 3 Initialize switching strategy 4 for each e i do 5 Detect corners in SAE by applying e-Harris, and project corner events to SACE. 6 Extract object corners in SACE using heatmaps. ...
Article
Full-text available
Robotic vision plays a major role in factory automation to service robot applications. However, the traditional use of frame-based cameras sets a limitation on continuous visual feedback due to their low sampling rate, poor performance in low light conditions and redundant data in real-time image processing, especially in the case of high-speed tasks. Neuromorphic event-based vision is a recent technology that gives human-like vision capabilities such as observing the dynamic changes asynchronously at a high temporal resolution (1μs) with low latency and wide dynamic range. In this paper, for the first time, we present a purely event-based visual servoing method using a neuromorphic camera in an eye-in-hand configuration for the grasping pipeline of a robotic manipulator. We devise three surface layers of active events to directly process the incoming stream of events from relative motion. A purely event-based approach is used to detect corner features, localize them robustly using heatmaps and generate virtual features for tracking and grasp alignment. Based on the visual feedback, the motion of the robot is controlled to make the temporal upcoming event features converge to the desired event in Spatio-temporal space. The controller switches its operation such that it explores the workspace, reaches the target object and achieves a stable grasp. The event-based visual servoing (EBVS) method is comprehensively studied and validated experimentally using a commercial robot manipulator in an eye-in-hand configuration for both static and dynamic targets. Experimental results show superior performance of the EBVS method over frame-based vision, especially in high-speed operations and poor lighting conditions. As such, EBVS overcomes the issues of motion blur, lighting and exposure timing that exist in conventional frame-based visual servoing methods.
... Such vision-based closed-loop control enhances accuracy, safety, flexibility, reliability, functionality and efficiency in robotic automation while also reducing the need for com-plex fixtures. Visual servoing has been adopted in a wide range of robotic applications such as pick and place [4], sorting [5], inspection [6], monitoring [7], parts assembly and disassembly [8], harvesting [9], assistive surgery [10] etc. Moreover, visual servoing have been deployed in robotic manipulators [11], unmanned ground vehicles (UGV) [12], unmanned ariel vehicles (UAV) [13], [14], unmanned under-water vehicles (UUV) [15], space robots [16], human-robot interaction (HRI) [17] and multi-robot systems (MRS) [18]. ...
... In modern industries ranging from e-commerce warehouses [24] and production/assembly lines [5] to domestic assistive robots for daily living [25], there has been an increased use of vision based robot control to perform manipulation tasks on static and moving objects. In the literature, several visual servoing pipelines have been devised to address these problems in both indoors and outdoors settings using a variety of control strategies and manipulator designs [24]- [30]. ...
... Input: Stream of events e i = x i , y i , t i , P ol i , Contiguity threshold C th Output: Command velocity V c . 1 Initialize three layers of surface of active events SAE, SACE and SAVE. 2 Initialize a desired feature event p cc (eg: center of the sensor plane) in SAVE 3 Initialize switching strategy 4 for each e i do 5 Detect corners in SAE by applying e-Harris, and project corner events to SACE. 6 Extract object corners in SACE using heatmaps. ...
Preprint
Robotic vision plays a major role in factory automation to service robot applications. However, the traditional use of frame-based cameras sets a limitation on continuous visual feedback due to their low sampling rate, poor performance in low light conditions and redundant data in real-time image processing, especially in the case of high-speed tasks. Neuromorphic event-based vision is a recent technology that gives human-like vision capabilities such as observing the dynamic changes asynchronously at a high temporal resolution (1μs) with low latency and wide dynamic range. In this paper, for the first time, we present a purely event-based visual servoing method using a neuromorphic camera in an eye-in-hand configuration for the grasping pipeline of a robotic manipulator. We devise three surface layers of active events to directly process the incoming stream of events from relative motion. A purely event-based approach is used to detect corner features, localize them robustly using heatmaps and generate virtual features for tracking and grasp alignment. Based on the visual feedback, the motion of the robot is controlled to make the temporal upcoming event features converge to the desired event in Spatio-temporal space. The controller switches its operation such that it explores the workspace, reaches the target object and achieves a stable grasp. The event-based visual servoing (EBVS) method is comprehensively studied and validated experimentally using a commercial robot manipulator in an eye-in-hand configuration for both static and dynamic targets. Experimental results show superior performance of the EBVS method over frame-based vision, especially in high-speed operations and poor lighting conditions. As such, EBVS overcomes the issues of motion blur, lighting and exposure timing that exist in conventional frame-based visual servoing methods.
... A charge-coupled device (CCD) camera was used on automated sorting system to detect object shape and position. The system was also equipped by target tracking that work based on synchronous conveyer and servo motor [3]. A CCD camera was also used in sorting system to classify gear based several feature including number of teeth, number of holes, or its colour. ...
... A 3 degree of freedom (DOF) parallel mechanism robot can be combined to develop a sorting system. This robot can perform repeating position with accuracy ±0.01mm in space, and can conduct more than 120 times/min pick-and-place operations [3]. A coordination system between 5-DOF anthropomorphic robot and stationary vision system can be implemented to handle an object [7]. ...
... In recent times, various sorting systems have been developed. The applications of sorting varies from agricultural products, consumer manufactured products, books, etc. Constantin and Michael in 2002 reported that every sorting methodology can be classified based on the specification of two issues: (1) the form of the criteria aggregation model which is developed for sorting purposes, and (2) the methodology employed to define the parameters of the sorting model [7][8][9][10][11][12][13][14]. Few researches were also based on automatic sorting, manual sorting and online sorting methods. ...
... The solution to this problem proves that the automated sorting process can increase the ability to manage tasks and reduce the average time of moving the workpiece in the system significantly. [3] shows that Delta robots can identify products by the physical characteristics but cannot prove that the product sorting system can be sorted correctly and faster than the human labor system. ...
Article
Full-text available
We propose an approach for enhancing the throughput level and decreasing the throughput time of the hard disk drive sorting department that are congested and inefficient. We model the hard disk drive sorting department with discrete-event simulation on Arena. The proposed automated loop conveyor system can reduce the waiting time, the work-in-process and the time in system by 74.78%, 85.09% and 96.85%, respectively when comparing with the current sorting system.
... The use of industrial robots is conducive to improving the level of social productivity, which is of great significance to traditional production methods. [1][2][3] In the logistics industry, there are problems such as low efficiency, high cost and high cost of traditional manual sorting operations. At the same time, in intelligent logistics, the use of machine vision for intelligent sorting tasks becomes more and more obvious. ...
... However, the sorting speed is slow, especially for large inertia robots. Zhang et al. 7 proposed a method to control the conveyor speed according to the distribution density of workpieces on the conveyor to ensure that the robot is always in the fastest sorting speed. But, it is a time-consuming process to ensure high-accuracy and high-speed performances by adjusting controller parameters, 8 and this method is difficult to achieve and does not meet the requirements of the production cycle. ...
Article
Full-text available
To improve the sorting accuracy and efficiency of sorting system with large inertia robot, this article proposes a novel trajectory planning method based on S-shaped acceleration/deceleration algorithm. Firstly, a novel displacement segmentation method based on assumed maximum velocity is proposed to reduce the computational load of velocity planning. The sorting area can be divided into four parts by no more than three steps. Secondly, since the positions of workpieces are dynamically changing, a dynamic prediction method of workpiece picking position has been presented to consider all the possible positions of the robot and the workpiece, so as to realize the picking position prediction of the workpiece at any positions. Each situation in this method can constitute an equation with only one solution, and the existence of the solution can be verified by the proposed graphical method. The simulations of the motion time of the sorting process show that the proposed method can significantly shorten the sorting time and improve the sorting efficiency compared with the previous method. Finally, this method was applied to the Selective Compliance Assembly Robot Arm (SCARA) robot for experiments. In the physical picking experiment, the missing-pick rate was less than 1%, which demonstrates the efficiency and effectiveness of this method.
Article
The vision-based automatic systems for in-line detection, identification, and separation are widely used in the industry, and it is difficult for such systems to achieve a simplified structure, high speed, high efficiency, and integrated coordinated control. Taking the dual-energy x-ray transmission solid waste high-speed sorting line as an example, an automatic control system was proposed, integrating data reading, image processing, sequential logic control, communication, and human–machine interface based on a personal computer with a general operating system. The hardware platform was introduced, and the design principles of operation parameters were investigated. The software was developed in C language with Microsoft Visual Studio 2012. The multi-core multi-thread technology has been utilized in which the optimized process-thread settings (OPTS), first-in first-out (FIFO) stacker, variable sharing between threads, and encoder position synchronization were employed. The whole system was experimentally verified at the line speed of 1 m/s. The results showed that the average scan cycle of sequential logic control was only 15 µs, which could fully ensure the real-time high-speed logic control and accuracy of position synchronization, OPTS improved the stability and disturbance rejection, FIFO stacker and variable sharing between threads was adapted to the realistic buffer materials, the encoder position synchronization avoided the accumulation of measurement errors, and the main operations run in parallel and coordinately and were suitable for the high-speed separation of multiple columns of irregular materials. The presented control system has the advantages of near real-time, high speed, high efficiency, low cost, easy reconstruction, and capability to manage and control integration and has a good practical application value.
Conference Paper
Full-text available
Machine vision is an application of computer vision to automate conventional work in industry, manufacturing or any other field. Nowadays, people in agriculture industry have embarked into research on implementation of engineering technology in their farming activities. One of the precision farming activities that involve machine vision system is automatic weeding strategy. Automatic weeding strategy in oil palm plantation could minimize the volume of herbicides that is sprayed to the fields. This paper discusses an automatic weeding strategy in oil palm plantation using machine vision system for the detection and differential spraying of weeds. The implementation of vision system involved the used of image processing technique to analyze weed images in order to recognized and distinguished its types. Image filtering technique has been used to process the images as well as a feature extraction method to classify the type of weed images. As a result, the image processing technique contributes a promising result of classification to be implemented in machine vision system for automated weeding strategy.
Conference Paper
In this paper, we propose a vision control strategy to perform automatic microassembly tasks in three dimensions (3-D) and develop the relevant control software. Specifically, a 6 degree-of-freedom (DOF) robotic workstation controls a passive microgripper to automatically grasp a designated micropart from the chip, pivot it, and then move it to vertically insert into a designated slot on the chip. In the proposed control strategy, the whole microassembly task is divided into two sequential subtasks: micro-grasping and micro-joining. To guarantee the success of microassembly and the manipulation accuracy, two different two-stage feedback motion strategies, based on pattern matching and an auto-focus method, are employed within the vision-based control system and the vision control software developed. Experiments conducted demonstrate the efficiency and validity of the proposed control strategy.
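The auto-focus step mentioned above typically searches a z-stack for the image that maximizes a sharpness score. A common choice is a squared-gradient measure; the helper names below are assumptions, not the paper's implementation:

```python
def focus_measure(img):
    """Squared-gradient sharpness score: sum of squared differences
    between horizontally adjacent pixels. Sharp images score high,
    defocused ones low."""
    score = 0.0
    for row in img:
        for a, b in zip(row, row[1:]):
            score += (b - a) ** 2
    return score

def best_focus(stack):
    """Return the index of the sharpest image in a z-stack, i.e. a
    coarse auto-focus over a set of images taken at different heights."""
    return max(range(len(stack)), key=lambda i: focus_measure(stack[i]))
```

A two-stage strategy would run this coarsely over a wide z-range, then repeat with a finer step around the winner.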
Conference Paper
On mechanical-part sorting lines, computer vision is commonly used to recognize parts by their features. At the core of such a machine vision system is the vision software, which is usually developed directly in VC++. Developers must encapsulate a dedicated image-processing class and apply pattern recognition methods to identify objects, so development efficiency is low and code optimization is difficult. This paper proposes combining a specialized machine vision software platform (e.g. HDevelop) with the VC++ development environment to recognize and sort mechanical parts on the production line, greatly improving both the development efficiency and the running speed of the system. The paper concludes with an experimental example that sorts various mechanical parts in an image by computing circularity and area features and applying recognition rules.
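The circularity feature named above is conventionally defined as 4&#960;A/P&#178;, which equals 1.0 for an ideal disc and falls toward 0 for elongated or ragged shapes. A toy version of the feature-plus-rules recognition is sketched below; the part names and thresholds are assumptions, not the paper's rules:

```python
import math

def circularity(area, perimeter):
    """Circularity = 4*pi*A / P^2: 1.0 for an ideal disc,
    lower for elongated or ragged blobs."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def classify_part(area, perimeter, circ_thresh=0.85, area_split=500.0):
    """Toy recognition rule: round blobs are split by area into
    'washer' and 'disc'; everything else is labeled 'bracket'.
    Thresholds and labels are illustrative only."""
    if circularity(area, perimeter) >= circ_thresh:
        return "washer" if area < area_split else "disc"
    return "bracket"
```

In an HDevelop/VC++ system the area and perimeter would come from blob analysis on the segmented image; only the rule layer is shown here.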
Conference Paper
Recent advances in high performance computing, coupled with the decreasing cost of hardware, now make machine vision a financially viable inspection option even for small and medium-sized firms. Among the many problems relevant to machine vision applications, the detection and tracking of moving targets is important in many settings, including industrial ones. Our challenge is to develop an effective methodology for analyzing dynamic images that can extract a moving target using only visual sensing as input, while keeping computation time and hardware cost to a minimum, typically with a standard Pentium-based computer and a standard CCD camera. This paper presents a segmentation method that combines dynamic segmentation with static segmentation and examines moving-object detection based primarily on the inter-frame difference method. We applied the methods to the motion control of a work-table driving system based solely on machine vision. The experimental results demonstrate that, in realistic situations, detecting moving targets with the inter-frame difference method is more effective than with the optical flow method.
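The inter-frame difference method at the core of the abstract above thresholds the absolute pixel-wise difference between consecutive frames; a minimal sketch (function names and the threshold value are assumptions):

```python
def frame_difference_mask(prev, curr, thresh=25):
    """Binary motion mask: a pixel is marked 1 where the absolute
    difference between consecutive grayscale frames exceeds thresh."""
    return [[1 if abs(c - p) > thresh else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

def moving_pixels(mask):
    """Count of motion pixels, a cheap cue that a target is moving."""
    return sum(sum(row) for row in mask)
```

This is far cheaper than optical flow (one subtraction and compare per pixel), which is why it suits a low-cost CPU-only setup; the tradeoff is that it responds to any intensity change, so lighting must be reasonably stable.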
Conference Paper
A vision-guided robot workstation is presented which picks up workpieces from a fast-moving conveyor belt. The role of computer vision as the feedback transducer strongly affects the closed-loop dynamics of the overall system, and a tracking controller with dynamic visual feedback is designed to achieve fast response and high control accuracy. In view of the long time delay and the heavy noise corruption embedded in visual data, the visual controller design problem is posed in the framework of stochastic optimal control theory. A Kalman filter is chosen to estimate the state of the target motion and is formulated as a joint detection and adaptive estimation method. A generalized predictive control strategy is used to compute the optimal path control data and is implemented in a weighted version. Experimental results are given to show the effectiveness of the approach.
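The state-estimation step above can be illustrated with a one-axis constant-velocity Kalman filter; a full conveyor tracker would run one per axis and add the delay compensation the abstract discusses. The noise values and the diagonal process-noise simplification below are assumptions:

```python
class Kalman1D:
    """Constant-velocity Kalman filter on one axis.
    State x = [position, velocity]; measurement z = position only."""

    def __init__(self, dt=0.04, q=1e-2, r=4.0):
        self.x = [0.0, 0.0]                 # position, velocity
        self.P = [[1e3, 0.0], [0.0, 1e3]]   # large initial uncertainty
        self.dt, self.q, self.r = dt, q, r  # step, process noise, meas. noise

    def predict(self):
        dt = self.dt
        x, v = self.x
        self.x = [x + v * dt, v]            # F = [[1, dt], [0, 1]]
        P = self.P
        # P = F P F^T + Q, with Q = q*I as a simplifying assumption.
        p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + self.q
        self.P = [[p00, p01], [p10, p11]]
        return self.x[0]

    def update(self, z):
        # H = [1, 0]: only position is measured by the camera.
        s = self.P[0][0] + self.r                    # innovation variance
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s  # Kalman gain
        resid = z - self.x[0]
        self.x = [self.x[0] + k0 * resid, self.x[1] + k1 * resid]
        p00, p01 = self.P[0][0], self.P[0][1]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [self.P[1][0] - k1 * p00, self.P[1][1] - k1 * p01]]
        return self.x[0]
```

Feeding noiseless measurements of a target moving at constant speed, the velocity estimate converges to the true speed within a handful of frames, which is what lets a predictive controller lead the target despite the vision delay.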
Conference Paper
A real-time tracking algorithm, used in conjunction with a predictive filter, allows real-time visual servoing of a robotic arm tracking a moving object. The system consists of two calibrated (but unregistered) cameras that provide images to a real-time, pipeline-parallel optic-flow algorithm, which robustly computes optic flow and calculates the 3-D position of a moving object at approximately 5-Hz rates. These 3-D positions serve as input to a predictive kinematic control algorithm that uses an α-β-γ filter to update the position of the robotic arm tracking the moving object. Experimental results are presented for the tracking of a moving model train along a variety of trajectories.
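An α-β-γ filter, as named above, is a fixed-gain tracker over position, velocity, and acceleration: predict with a constant-acceleration model, then correct each state with its own gain. A one-axis sketch follows; the gain values are conventional assumptions, not those of the cited system:

```python
class AlphaBetaGamma:
    """Fixed-gain alpha-beta-gamma tracking filter on one axis.
    Gains alpha, beta, gamma weight the position, velocity, and
    acceleration corrections respectively; values are illustrative."""

    def __init__(self, alpha=0.5, beta=0.4, gamma=0.1, dt=0.2):
        self.a, self.b, self.g, self.dt = alpha, beta, gamma, dt
        self.x = self.v = self.acc = 0.0

    def step(self, z):
        dt = self.dt
        # Predict with a constant-acceleration motion model.
        xp = self.x + self.v * dt + 0.5 * self.acc * dt * dt
        vp = self.v + self.acc * dt
        r = z - xp  # innovation: measured minus predicted position
        # Correct each state component with its fixed gain.
        self.x = xp + self.a * r
        self.v = vp + (self.b / dt) * r
        self.acc = self.acc + (2.0 * self.g / dt ** 2) * r
        # Return a one-step-ahead prediction for the arm controller.
        return self.x + self.v * dt + 0.5 * self.acc * dt * dt
```

Unlike a Kalman filter, the gains never adapt, which is exactly why it fits a hard real-time loop at a fixed camera rate: the per-frame cost is a few multiplications, with no covariance updates.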
Article
During the past six years, the use of computer vision systems for industrial applications has become increasingly widespread. In the paper, the application of vision in this context is surveyed and reviewed, according to the principal current application areas of automated visual inspection and visually guided robotic manipulation. Additionally, recently published research and development work for the identification and location of industrial components is discussed, as this forms the basis for improving the existing highly-constrained application of machine vision. The survey is restricted to the period since 1982 in order to complement existing publications at or around that time.
W. Wiehman, Use of optical feedback in the computer control of an arm, AI Memo 55, Stanford AI Project, August 1967.
Jiangping Mei, Panfeng Wang, and Tian Huang, Path planning and motion control of a 2-DOF translational high-speed parallel manipulator, Manufacturing Automation, vol. 26(9), pp. 29-33.
Kamarul Hawari Ghazali and Saifudin Razali, Machine vision system for automatic weeding strategy in oil palm plantation using image filtering technique, in Proc. 3rd International Conference on Information and Communication Technologies: From Theory to Applications (ICTTA 2008), 2008, pp. 1-5.