Development of Marker-Free Night-Vision Displacement Sensor System by Using Image Convex Hull Optimization
Insub Choi, JunHee Kim * and Jisang Jang
Department of Architectural Engineering, Yonsei University, 50 Yonseiro, Seodaemun-gu, Seoul 120-749, Korea;
insub@yonsei.ac.kr (I.C.); gjjs6645@naver.com (J.J.)
*Correspondence: junhkim@yonsei.ac.kr; Tel.: +82-2-2123-2783
Received: 29 October 2018; Accepted: 19 November 2018; Published: 27 November 2018


Abstract:
Vision-based displacement sensors (VDSs) have the potential to be widely used in structural health monitoring because they are generally easier to install and more applicable to existing structures than conventional displacement sensors. However, the VDS also has disadvantages: ancillary markers are needed to extract displacement data, and data reliability drops significantly at night. In this study, a night vision displacement sensor (NVDS) is proposed to overcome these two limitations. First, a non-contact NVDS system is developed by installing an infrared (IR) pass filter. Because the system uses wavelengths in the infrared region and is insensitive to changes in visible light, it can precisely extract the shape information of a structure even at night. Second, a technique for extracting feature points from images without any ancillary marker is formulated through an image convex hull optimization. Finally, experimental tests of a three-story scaled model were performed to investigate the effectiveness of the proposed NVDS at night. The results demonstrate that the NVDS is sufficiently accurate even at night and can precisely measure dynamic characteristics such as the mode shapes and natural frequencies of the structure. The proposed system and formulation extend the applicability of vision sensors to both night-time and marker-free measurement.
Keywords: vision-based displacement sensor; night vision; image convex hull; non-target; dynamic characteristics
1. Introduction
In modern communities, with high populations and building densities in downtown areas, it is more important than ever to identify and evaluate the status of existing buildings. Because the status of a building is evaluated from data describing its behavior, a technology is needed that can acquire such behavior data with high reliability. The data typically obtained from a building are displacement or acceleration. Displacement data are useful for quickly and intuitively evaluating the status of a building through comparison with the allowable displacement set at the design stage. They have been widely utilized in system identification (SI) techniques, which evaluate the status of a building by determining its dynamic characteristics, and in damage assessment techniques, which evaluate building damage [1-5]. In other words, acquiring displacement data, the raw data containing the behavior information of a structure, is essential for evaluating its status.
The vision-based displacement sensor (VDS) extracts displacement data by processing captured images of the structure. It is a non-contact sensor that is easy to install and entails low costs. In general, contact-type sensors such as the linear variable displacement transformer (LVDT) are mostly used to extract displacement data; these contact-type sensors, however, have a limitation: a support is needed to fix the sensor, so it is difficult to obtain displacement data at the points where monitoring matters most, such as the midspan of a beam [6]. Other non-contact sensors include the global positioning system (GPS) and the laser displacement sensor (LDS), but they have difficulty extracting the dynamic characteristics that are closely related to structural damage, as the resolution of GPS is ±10 mm horizontally and ±20 mm vertically [7]. The accuracy of the LDS is about 0.03 mm, which is sufficient for structures, but its effective measuring distance is short and a support is needed for the sensor [8]. The VDS, by contrast, requires neither a data logger nor a support at the measured location and can acquire the displacement data of a building from a distance, so it can be widely applied to existing buildings. A data logger is an electronic device that records, over time, the electronic signals of sensors such as the LVDT and LDS.
Since the early 1990s, the displacement data required for the health monitoring of structures have been obtained through the VDS, applied mainly to measuring the displacement of bridges. Technologies such as algorithms for extracting feature points from images have been developed in the field of electrical engineering and electronics (Circle Hough Transform [9], Canny edge detector [10], Harris corner detector [11]). To extract the feature points easily, various markers (circle marker plates, pattern plates, light-emitting diodes [LEDs]) are attached to the area of the structure where the displacement is to be measured [12-15]. The feature points extracted from each image are then tracked, and the displacement data are obtained by converting the image coordinate system to the physical coordinate system using the geometry of the marker. Existing studies have verified the reliability of the displacement data obtained from the marker-based VDS and found that the VDS has a high degree of precision [16-20]. The marker-based method, however, has disadvantages: accurate data can be obtained only if the marker is rigidly fixed onto the structure, the marker may fall off, and displacement data can be obtained only at the locations where markers are attached.
In this regard, many studies have recently been conducted to acquire displacement data from images without using a marker. Feng et al. [21,22] studied a method of extracting displacement data using natural markers such as bolt holes and rivets on structures. As this method can acquire displacement data only where a natural marker happens to exist, however, the displacement of an arbitrary desired part of the structure cannot be obtained through the VDS. A common marker-free approach is the digital image correlation (DIC) method, which matches the gray-scale distribution of images before and after the deformation of the structure to capture the feature points [23-25]. According to published results, the DIC technique is sufficiently more accurate than marker-based measurement and the LVDT, a conventional contact-type sensor. Although DIC can handle some changes in light intensity, further work is needed before it can address various time-critical problems [26]. Bartilson et al. [27] proposed a method of extracting displacement data without an ancillary marker based on the minimum quadratic difference (MQD) algorithm. While the principle of the MQD algorithm is similar to that of DIC, MQD improves on it in terms of computational cost because it performs the cross-correlation using the fast Fourier transform (FFT). The optical flow-based Harris corner detector [11] extracts the displacement from corners of the structure; according to related studies [28-30], structural displacement can be reliably extracted without ancillary markers at desired positions of the structure. In recent years, quick and fine movements of structures have been captured by measuring displacement with high-speed cameras and optical flow-based methods [31-33]. However, since the basic principle of these methods is brightness constancy, their accuracy drops when the brightness of the light varies.
The VDS is easier to install than other displacement sensors and can obtain data easily, but the reliability of the data depends largely on natural factors such as heat waves or camera shake. For structures in particular, 24-h monitoring is essential; yet because the input data of the VDS are wavelengths in the visible-light region, the reliability of the data drops drastically at night, when visible light is lacking. Research using LEDs has been conducted to overcome this disadvantage [34], but it suffers from reduced displacement accuracy due to LED light leaks, in addition to the aforementioned marker-related problems.
Night vision has been known since 1974, when the U.S. Army began to use image intensifiers based on an image intensification technique. The image intensifier detects part of the infrared region, but it secures visibility at night mainly by electrically amplifying photons in the visible region near green, to which the human eye is most sensitive. The infrared camera, in contrast, detects wavelengths in the infrared region through the radiant heat emitted from an object and converts them to an image. Although the image intensifier and the infrared camera thus rest on different basic principles, both technologies are used to increase visibility at night. As image intensifiers are expensive, night vision is commonly implemented with infrared cameras [35,36]. The image sensors used in general cameras, such as the charge-coupled device (CCD) or the complementary metal-oxide-semiconductor (CMOS), can detect wavelengths from about 400 to 1000 nm, spanning the visible region and part of the infrared region. In other words, since a general commercial camera can already detect the infrared region, the hardware needed to develop a displacement sensor suitable for night vision from a commercial camera is already available.
The purpose of this study is to propose a night vision displacement sensor (NVDS), a marker-less VDS optimized for night-time use. The proposed method consists of a hardware part, covering the installation of an infrared (IR) pass filter on a commercial camera, and a software part, comprising a convex hull optimization model that extracts feature points from the image data without ancillary markers and a scaling factor map that converts the pixel coordinates of the extracted feature points to physical coordinates. A dynamic experiment on a three-story scaled model was performed to determine whether the proposed method extracts highly reliable data in an actual night-time environment. The dynamic displacement of the model was measured by the NVDS, a general VDS with a marker, and an LDS (the reference). The displacement data obtained through each measurement method were compared, and the dynamic characteristics of the structure extracted from the displacement data were examined to determine whether the proposed method is suitable for structural monitoring.
2. Night Vision Displacement Sensor (NVDS)
The night vision displacement sensor (NVDS) proposed in this study extracts the displacement data of a structure using electromagnetic waves in the infrared region by applying an IR pass filter to a commercial camera. Figure 1 shows the technique for extracting the displacement data using the NVDS: the input data of the NVDS are infrared rays, and the output data are the time-domain displacement data of the structure.
This section consists of three subsections, which describe the hardware constituting the NVDS, the feature-point extraction using image convex hull optimization, and the scaling factor map that converts the coordinates of the extracted feature points.
2.1. Hardware Description for Night Vision System
The night vision system consists of a night vision camera, a tripod, a laptop for image processing, and a USB cable for data communication. The purpose of the image-based measurement system using night vision is to measure the displacement of a structure with the VDS even when the light intensity is low, whether at night or due to weather changes. In general, the CCD and CMOS image sensors used in cameras can detect wavelengths above 700 nm in the infrared region, so a general camera can in principle detect the infrared region; a red-eye effect, however, occurs when it does. To prevent this phenomenon, an infrared cut-off filter, sometimes called an "IR filter," is installed between the camera lens and the image sensor to block the infrared region. In contrast, an IR pass filter, generally called an "infrared filter," reflects light with wavelengths below 760 nm and transmits light with wavelengths above 760 nm. In this study, the hardware of the night vision measurement system was configured by removing the existing infrared cut-off filter of a general camera and installing an infrared filter that transmits only wavelengths in the infrared region.
The night vision camera used in this study was the LifeCam HD-5000. Figure 2 compares images of the eye region taken with a general camera and with the night vision camera. All procedures involving Figure 2 were explained to the participant, and consent for its use was obtained in advance. As shown in Figure 2, the structure of the iris can be clearly seen in the image taken with the night vision camera but not in the image taken with the general camera. In addition, as the image taken with the night vision camera received the infrared region, a red-eye effect occurred. Because this red-eye phenomenon does not distort the image once it is converted to gray-scale through image processing, it was verified that images of the infrared region can be captured through the night vision system.
Figure 1. Time-domain displacement data acquisition algorithm using the night vision displacement sensor (NVDS).
Figure 2. Comparison of images of a person's eye taken by commercial cameras: (a) LifeCam HD-5000 (general camera); (b) LifeCam HD-5000 with IR pass filter (night vision camera).
2.2. Image Convex Hull Optimization Method
A gray-scale image is a set of pixels, each with an integer value from 0 to 255 corresponding to the intensity of light. The value of a pixel is greatly influenced by changes in the external light, but it also varies with the shape and structure of the object. Recognizing a human face or an object from pixel information is a widely used approach in the field of computer vision. To extract the displacement data of a structure from images taken without a marker, or to extract the displacement at any desired position in the image, the ideal method is to work from the shape information of the structure itself, without accessories such as markers. This study therefore developed an image convex hull optimization method that extracts the same image convex hull from every frame of the video of the structure, so that the displacement of the structure can be extracted without any marker.
2.2.1. Image Convex Hull
The image convex hull is the convex polygon corresponding to the area filled with white dots (the parts represented by 1 in a binary image) after a gray-scale image is binarized at a specific threshold value between 0 and 1, as shown in Figure 3. In the field of computer vision, various image processing algorithms are used to classify objects from recognized feature points and to identify them [37,38]. Among these, the image convex hull was used in this study because it facilitates the extraction of specific shapes or feature points from the image and has a high computational speed.
Figure 3. Description of image convex hull.
To facilitate the definition of the image convex hull, the gray-scale image, whose values range from 0 to 255, is first normalized so that each pixel value lies between 0 and 1. This normalization reduces the range of the threshold values that distinguish the boundaries of the binary image to 0-1 rather than 0-255. The threshold value is then calculated by analyzing the brightness histogram of the image, and the image is binarized by setting pixels with values above the threshold to 1 and pixels with values below the threshold to 0.
In the binary image, the convex hull of the region filled with white dots can be defined, and the image convex hull is obtained from it. That is, the image convex hull is derived from the binarized image produced by the threshold value, so the shape of the image convex hull can be controlled by adjusting the threshold value [39].
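To make this concrete, the binarization-and-hull step can be sketched in a few lines of Python with OpenCV. This is a minimal illustration following the conventions above (a normalized 0-1 threshold, white pixels set to 1), not the authors' implementation:

```python
import cv2
import numpy as np

def image_convex_hull(gray, t):
    """Binarize a gray-scale frame at normalized threshold t (0-1) and return
    the convex hull of the white region together with its area in pixels^2."""
    norm = gray.astype(np.float64) / 255.0      # normalize 0-255 values to 0-1
    binary = (norm > t).astype(np.uint8)        # pixels above t become white (1)
    pts = cv2.findNonZero(binary)               # coordinates of the white dots
    if pts is None:                             # nothing exceeded the threshold
        return None, 0.0
    hull = cv2.convexHull(pts)                  # convex polygon around the dots
    return hull, cv2.contourArea(hull)
```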
2.2.2. Formulation of Convex Hull Optimization for Extracting Feature Points
As various binary images can be obtained by adjusting the threshold value, a variety of image convex hulls can be obtained from a single image. In this study, the centroid of the image convex hull is used as the feature point, and the displacement data are extracted from the change of this centroid, so that feature points can be obtained from marker-free images of a structure. If the same convex hull can be extracted from each frame by adjusting the threshold value, the centroid that determines the displacement data of the structure can be extracted precisely. The image convex hull optimization method is therefore used to find, in every frame, the image convex hull identical to that of the reference frame by adjusting the threshold value. In this study, the reference frame was the first frame of the video.
To find the same image convex hull as the reference frame in every frame, a measure is needed for comparing the image convex hull extracted from each frame with that of the reference frame. Although various comparative measures exist, the similarity of the image convex hulls of two frames can be judged quickly through the area of the polygon. If the brightness of the light changes drastically, or if severe deformation of the structure occurs, it is difficult to find the same image convex hull. Video, however, is usually taken at 30 frames per second, and the night vision used in this study is not sensitive to the visible-light region, so finding the same convex hull poses no difficulty; the validation of the proposed feature-point search is discussed in detail in Section 2.2.3. If the difference between the image convex hull area of the reference frame and that of the i-th frame satisfies a certain convergence condition, the same image convex hull is considered to have been found. Therefore, the image convex hull optimization problem can be defined as follows:
Find $t_i$
Minimize $|A_1(t_1) - A_i(t_i)|$
subject to $0 \le t_i \le 1$, $i = 2, 3, \cdots, n$

where $t_i$ is the threshold value in the $i$-th frame, $A_i$ is the area of the image convex hull in the $i$-th frame, and $n$ is the number of image frames.
The process of extracting the feature points from the formulated convex hull optimization problem is as follows.
1. The region of interest (ROI) of the reference frame is set, and $t_1$, the threshold value defining the image convex hull of this ROI, is determined. The threshold value $t_1$ of the first frame is obtained through Otsu's method [39], which analyzes the histogram to minimize the difference between the numbers of black and white pixels.
2. The convergence criterion for the image convex hull optimization problem is set. It can be determined by considering the distance between the camera and the target, the pixel pitch of the camera, and the ROI. In this study, the errors in feature-point extraction were reduced when the convergence criterion was set to approximately $10^{-9}$ m$^2$, expressed as the area projected onto the image sensor.
3. The optimization problem is solved by adjusting the value $t_i$ of the $i$-th frame. If the area of the convex hull meets the convergence condition when compared with that of the reference frame, the value $t_i$ is stored and the process is repeated for the $(i+1)$-th frame.
4. Once $t_i$ has been obtained for all frames, each image is binarized with its $t_i$, and the convex hull filled with white dots is obtained in the binarized image. The centroid of this convex hull is calculated as the feature point, and the feature-point coordinates $(u_i, v_i)$ of the $i$-th frame are stored. This is repeated for all frames.
5. From the coordinates of the obtained feature points, the relative coordinates $(U_i, V_i)$ are obtained using Equation (1). The relative coordinates in pixel units are then converted to physical coordinates using the scaling factor model discussed in Section 2.3.

$(U_i, V_i) = (u_{i+1}, v_{i+1}) - (u_i, v_i)$ (1)

where $i$ ranges from 1 to $n - 1$ and $n$ is the total number of frames.
Through the above process, the feature points of a structure can be extracted without using a marker, and the displacement data can be extracted from them, as illustrated in the sketch below.
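The whole loop can be written compactly in Python. This is a minimal sketch of steps 1-5, not the authors' implementation: `image_convex_hull` is the helper sketched in Section 2.2.1, and the bounded scalar minimization is one reasonable solver choice (the paper does not name a specific optimizer).

```python
import cv2
import numpy as np
from scipy.optimize import minimize_scalar

def hull_centroid(hull):
    m = cv2.moments(hull)                       # polygon moments of the hull
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

def extract_feature_points(frames, eps_area=100.0):
    """Steps 1-5 for a list of gray-scale ROI frames; eps_area is the
    convergence criterion on the hull-area difference (pixel^2)."""
    # Step 1: Otsu's method on the reference frame gives t1 (rescaled to 0-1).
    t1, _ = cv2.threshold(frames[0], 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    hull, area_ref = image_convex_hull(frames[0], t1 / 255.0)
    centroids = [hull_centroid(hull)]
    for frame in frames[1:]:
        # Step 3: adjust t_i so the hull area matches the reference area.
        res = minimize_scalar(
            lambda t: abs(area_ref - image_convex_hull(frame, t)[1]),
            bounds=(0.0, 1.0), method="bounded")
        if res.fun > eps_area:
            raise RuntimeError("convergence criterion not met")
        # Step 4: binarize with the stored t_i; the hull centroid is (u_i, v_i).
        hull, _ = image_convex_hull(frame, res.x)
        centroids.append(hull_centroid(hull))
    # Step 5: relative pixel coordinates (U_i, V_i) of Equation (1).
    c = np.asarray(centroids)
    return c, c[1:] - c[:-1]
```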
2.2.3. Validation of Convex Hull Optimization
As the image convex hull optimization algorithm extracts the same shape information in every frame using the area, which represents the structural shape, the convergence characteristics need to be verified, including the choice of an appropriate convergence criterion for the area. The convergence criterion depends on the size of the camera's image sensor, the distance to the target, and the focal length. Considering the focal length, distance, and pixel pitch of the camera used in this study, the convergence condition was set so that the difference in area between the first frame (the reference frame) and the current frame is less than 10 × 10 pixels. Converted to actual physical coordinates, this convergence condition corresponds to an area difference of $1.11 \times 10^{-9}$ m$^2$; shape information with almost no difference from the area of the reference frame can therefore be extracted.
Figure 4 shows the shape information and feature points obtained through the image convex hull optimization: the gray-scale images in the 40th, 120th, and 180th frames (three arbitrary frames) and the shape information of the first frame (the reference frame). A graph of the convergence of the objective function of the optimization algorithm over the threshold value shows that the objective function converges so that the area difference is minimized in all frames. Based on these results, the convergence of the image convex hull optimization was confirmed, and it was demonstrated that the shape information of the actual structure could be extracted precisely.
Figure 4. Gray-scale image and its convex hull for extracting the feature point; convergence of the image convex hull optimization method for a specific convergence criterion.
The proposed image convex hull optimization algorithm and the Lucas-Kanade optical flow algorithm [40] are similar in that neither needs an ancillary marker to obtain displacement data from images. The Lucas-Kanade algorithm solves the basic optical flow equation for all pixels in a neighborhood by the least-squares criterion. Optical flow is defined as the apparent motion of brightness patterns, so it can be regarded as the projection of the three-dimensional velocity vector onto the image. Brightness constancy must therefore be maintained to extract the displacement data of a structure with the Lucas-Kanade algorithm; if it is not, the reliability of the resulting displacement data drops drastically [41]. The proposed image convex hull optimization technique, in contrast, is not sensitive to changes in light brightness, because the feature point is extracted by comparing the shape information obtained by varying the threshold value of the image sequence with the predetermined shape information of the reference frame.
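For comparison, a typical Lucas-Kanade tracking loop with OpenCV is sketched below. It is the brightness-constancy-based baseline that the proposed method is contrasted with, not part of the NVDS pipeline; the corner-detection parameters are illustrative values.

```python
import cv2
import numpy as np

def track_lucas_kanade(frames):
    """Track corner features through gray-scale frames with pyramidal
    Lucas-Kanade optical flow; relies on brightness constancy."""
    prev = frames[0]
    p0 = cv2.goodFeaturesToTrack(prev, maxCorners=20, qualityLevel=0.3,
                                 minDistance=7)
    tracks = [p0.reshape(-1, 2)]
    for frame in frames[1:]:
        p1, status, err = cv2.calcOpticalFlowPyrLK(prev, frame, p0, None,
                                                   winSize=(21, 21), maxLevel=3)
        tracks.append(p1.reshape(-1, 2))
        p0, prev = p1, frame                  # tracked points seed the next frame
    return np.stack(tracks)                   # (n_frames, n_points, 2) pixel paths
```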
2.3. Scaling Factor Model
The scaling factor converts the pixel coordinates of the feature points to physical coordinates. The scaling factor ($SF_1$) obtained using a marker can be calculated as the ratio of the actual marker size to the size of the marker in the image:

$SF_1 = \dfrac{D_1}{d_1}$ (2)

where $D_1$ is the diameter of the marker in the physical plane (mm) and $d_1$ is the diameter of the marker in the image plane (pixels).
When a marker is not used, the scaling factor ($SF_2$) can be calculated from the distance $Z$ between the camera and the target, the focal length $f$, and the pixel pitch $R$ of the image sensor:

$SF_2(c_x, c_y) = \dfrac{Z}{f}R$ (3)
Ideally, the $SF_1$ calculated from the actual size of the marker and the $SF_2$ calculated from the external conditions should yield the same result. In a preliminary experiment, however, the error between $SF_1$ and $SF_2$ was observed to increase as the marker moved away from the center of the image plane. Owing to the characteristics of a pinhole camera, the distance between the marker and the camera as seen by the camera varies with the position in the image (i.e., the position of the actual target), while the focal length is constant. That is, as the marker or object moves away from the center of the image plane, the distance $Z$ from the camera increases, and this must be corrected. Accordingly, a scaling factor map, in which the scaling factor is calculated differently depending on the pixel coordinates of the object in the image plane, can be obtained using the equations below.
$A(c_x \pm i, c_y \pm j) = \sqrt{\big(i \times SF_2(c_x \pm i - 1, c_y)\big)^2 + \big(j \times SF_2(c_x, c_y \pm j - 1)\big)^2}$ (4)

$Z(c_x \pm i, c_y \pm j) = \sqrt{Z(c_x, c_y)^2 + A(c_x \pm i, c_y \pm j)^2 - 2\,Z(c_x, c_y)\,A(c_x \pm i, c_y \pm j)\cos(\theta)}$ (5)

$SFM(c_x \pm i, c_y \pm j) = \dfrac{Z(c_x \pm i, c_y \pm j)}{f}R$ (6)

where $A$ is the distance, $c_x$ and $c_y$ are the center pixel coordinates of the image plane, and $SFM$ is the scaling factor map.
The camera used in this study, the LifeCam HD-5000, records 30 fps at a 1280 × 760 resolution with a 24 mm focal length. At this resolution, the values of $c_x$ and $c_y$ are 640 and 380, respectively. The value $R$ is the pixel pitch, i.e., the physical sensor size occupied by one pixel in the image plane; for video recorded with the LifeCam HD-5000, $R$ is 3.33 µm/pixel. For more information about the scaling factor map, see reference [42].
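Under the camera parameters just listed, Equation (3) and a simplified version of Equations (4)-(6) can be evaluated directly. The following Python sketch is illustrative only: it approximates $SF_2$ as constant when computing $A$ in Equation (4), and the interpretation of $\theta$ as the angle between the line of sight and the in-plane offset (90° plus the 3° camera tilt reported in Section 3) is an assumption of this sketch, not a statement from the paper.

```python
import numpy as np

Z0 = 1600.0                 # mm, camera-to-target distance (Section 3)
f = 24.0                    # mm, focal length
R = 3.33e-3                 # mm/pixel, pixel pitch
tilt = np.deg2rad(3.0)      # camera angle reported in Section 3

sf2_center = Z0 / f * R     # Equation (3) at the image center
print(f"SF2 at center: {sf2_center:.4f} mm/pixel")   # -> 0.2219 mm/pixel

def scaling_factor_map(width=1280, height=760):
    """Simplified Equations (4)-(6): pixels away from the image center see a
    larger distance Z and hence a slightly larger scaling factor."""
    cx, cy = width // 2, height // 2
    jj, ii = np.meshgrid(np.arange(height) - cy, np.arange(width) - cx,
                         indexing="ij")
    A = np.hypot(ii * sf2_center, jj * sf2_center)  # Eq. (4), constant-SF2 approx.
    theta = np.pi / 2 + tilt    # assumed angle between sight line and offset
    Z = np.sqrt(Z0**2 + A**2 - 2 * Z0 * A * np.cos(theta))  # Eq. (5)
    return Z / f * R                                        # Eq. (6)

sfm = scaling_factor_map()
print(sfm.min(), sfm.max())   # spans roughly the 0.22 mm/pixel range of Section 3
```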
3. Shake Table Experimental Outline
The objective of the shake table experiment is to verify the accuracy of the dynamic displacement of a structure obtained using the NVDS at night. A preliminary experiment had shown that static displacement could be precisely extracted using the NVDS; the shake table experiment was performed to prove that the dynamic displacement and dynamic characteristics of a structure can also be measured accurately. A scaled model of a three-story, one-span shear frame was fabricated for the shake table experiment. The dimensions of the scaled model were 150 mm in the short-side direction and 250 mm in the long-side direction, and the height of each story was 400 mm. The members of the scaled model were fabricated from SS400 steel. The cross-section of the columns was 5 × 5 mm, and those of the beams and braces were 4 × 6 mm. The slabs were made of 5-mm-thick acrylic and assembled with bolts. Brackets for installing the braces were welded onto both ends of the columns so that the braces could be mounted and removed with bolts. The weight of each story was 3.03 kg, including a 2 kg mass plate, and the weight of the whole model was 9.09 kg.
As shown in Figure 5, braces were installed in the long-side direction of the scaled model to prevent out-of-plane behavior, and the load was applied with a shake table in the short-side direction. The load was 50 Hz-bandwidth white noise, and its amplitude was scaled down, based on a pre-test analysis of the scaled model, so that the structure would remain in the elastic range. The frequency components of the white noise were spaced at 0.0977 Hz intervals from 0 to 50 Hz, a band sufficient for detecting the dynamic characteristics of the scaled model.
Three methods were used to measure the dynamic displacement of the scaled model: an LDS, the NVDS, and a VDS with a marker. The accuracy of the displacement data obtained from the NVDS at night was evaluated by comparison with the displacement data obtained using the other methods, with the LDS data serving as the reference.
Figure 5. Shake table experimental setup of scaled model with dynamic displacement measuring system.
The experiment was conducted at 10:00 p.m., under low light intensity. Figure 6 compares photographs taken with the LifeCam HD-5000 equipped with an IR pass filter and with a general camera. As shown in Figure 6, the images taken with the general camera are dark overall due to insufficient light, whereas those taken with the IR pass filter camera are clear despite the low light intensity. That is, the general camera generates noisy images at night because of insufficient light, while the camera with the IR pass filter, which forms its image from wavelengths in the infrared region, generates much less noise even at night. The video was taken at a 1280 × 760 resolution (approximately 1 megapixel) and at 30 fps. The distance between the camera and the center of the structural image was 1600 mm, the focal length was 24 mm, the camera angle was 3°, and the scaling factor calculated through the scaling factor model from the external environment and the camera specifications was 0.2219-0.2236 mm/pixel at the 5-subpixel level.
Figure 6. Real images of the scaled model in the shake table experiment taken with (a) the general LifeCam HD-5000, (b) the LifeCam HD-5000 with IR pass filter and (c) the gray-scale image from the LifeCam HD-5000 with IR pass filter.
4. Test Results of Proposed NVDS
This section analyzes the reliability of the dynamic displacement obtained from the NVDS. The displacement data were compared directly with those obtained using the LDS (the reference data). In addition, the natural frequencies and mode shapes of the scaled model were extracted from the obtained displacement data and compared with the analytical values computed with the commercial program MIDAS-Gen.
4.1. Dynamic Displacement
The dynamic displacement data obtained from the NVDS were compared directly with the reference data obtained using the LDS. Figure 7 shows time-displacement graphs comparing the displacement of the white-noise-excited scaled model measured with the NVDS and with the LDS. The amplitude and period of the NVDS data agree well with those of the LDS data. It was also confirmed that the NVDS could precisely measure not only the large displacement occurring under loading but also the relatively small displacement of about 2 mm occurring during the free vibration after loading. The VDS, on the other hand, was similar to the LDS in terms of the maximum and minimum displacements, but it failed to extract the displacement associated with the vibration in the structure's high-frequency region. This means that the reliability of VDS displacement data is low at night, when the light intensity is low, even when a marker ensuring easy identification is used.
The pixel coordinates were converted to physical coordinates through the scaling factor map calculated from the distance between the camera and the scaled model, the camera angle, and the focal length. The scaling factor was 1.1095 mm/pixel at the center of the image and 1.1180 mm/pixel at the edge. The resolution of the NVDS was accordingly 0.2219-0.2236 mm/pixel, because each pixel was divided into 5 subpixels during image processing. If the subpixel level is increased, the accuracy of the sensor may improve as the resolution of the NVDS becomes finer. As the number of pixels to be computed increases, however, and as the pixel information of the original image may be distorted, a moderate subpixel level is required. In this experiment, as shown in Figure 7, the displacement data obtained using the NVDS secured a sufficient sensor resolution even when compared with the data from the LDS, whose resolution is 0.01 mm.
For the reliability analysis, the displacement data obtained from the NVDS were compared numerically with those from the LDS through root mean square (RMS) analysis; the results are summarized in Table 1. The RMS difference between the LDS and the NVDS was 0.53% on the first story, 0.24% on the second story, and 2.51% on the third story. Although the largest error occurred on the third story, the numerical error between the two datasets did not exceed 3%. That is, it was confirmed that even without natural or artificial markers on the structure, highly reliable dynamic displacement can be obtained using the proposed NVDS.
Figure 7. Dynamic displacement comparison between LDS, NVDS and VDS: (a) 1st floor; (b) 2nd floor; (c) 3rd floor.
Table 1. Displacement data reliability results in the white noise test (RMS displacement in mm; difference from the LDS in parentheses).

Story | LDS | NVDS | VDS
1 | 7.0681 | 7.0303 (0.53%) | 7.2684 (2.83%)
2 | 8.6418 | 8.6625 (0.24%) | 8.0935 (6.34%)
3 | 10.9697 | 10.6947 (2.51%) | 10.1105 (7.83%)
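The RMS comparison in Table 1 is straightforward to reproduce from the time histories. A minimal sketch, where the input arrays are placeholders for the measured records:

```python
import numpy as np

def rms(x):
    """Root mean square of a displacement time history."""
    return np.sqrt(np.mean(np.square(x)))

def rms_difference_pct(reference, candidate):
    """Percentage RMS difference against the LDS reference, as in Table 1."""
    return abs(rms(candidate) - rms(reference)) / rms(reference) * 100.0

# e.g. rms_difference_pct(lds_story1, nvds_story1) -> ~0.53 for the Table 1 data
```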
4.2. Modal Analysis
If the obtained displacement data can be used to extract the dynamic characteristics of the structure, they can also be used for system identification or damage assessment. It is therefore also important to analyze the reliability of the obtained dynamic characteristics when verifying the performance of the sensor. The dynamic characteristics of the structure, such as the mode shapes and natural frequencies, were extracted by converting the time-domain displacement data to the frequency domain. Figure 8 compares the natural frequencies obtained from the LDS, NVDS, and VDS. The first natural frequencies obtained from the LDS and the NVDS were identical (3.37 Hz), as were the second natural frequencies (10.89 Hz). The first and second natural frequencies obtained from the VDS, however, were 3.57 Hz and 11.43 Hz, which differ from those obtained from the LDS and NVDS. Consistent with the displacement comparison, in which the reliability of the VDS data was low, this confirms that for analyzing a structure's dynamic characteristics at night, the NVDS, which extracts the displacement data from the light-intensity information of the infrared region, is more reliable than the VDS. The analysis also showed that the NVDS could not measure the natural frequency of the third mode of the structure because of the aliasing effect: if the sampling rate is lower than twice the frequency of the actual signal, here the natural frequency of the structure, the measured frequencies differ from the actual ones. Since the NVDS samples at 30 fps (a 33.3 ms interval), only the frequency range up to 15 Hz can be analyzed. The third natural frequency of the structure obtained using the LDS was 16.10 Hz, which exceeds the measurement range of the NVDS; thus, the third natural frequency could not be extracted with the NVDS.
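The conversion from the time domain to natural-frequency estimates can be sketched with a standard FFT peak pick; `fs = 30.0` is the NVDS frame rate, which sets the 15 Hz Nyquist limit discussed above. The prominence threshold is an illustrative assumption, not a value from the paper.

```python
import numpy as np
from scipy.signal import find_peaks

def natural_frequencies(disp, fs=30.0):
    """Estimate natural frequencies from a displacement record by peak-picking
    the amplitude spectrum; nothing above fs/2 (15 Hz here) is observable."""
    disp = np.asarray(disp, float) - np.mean(disp)   # remove the static offset
    spec = np.abs(np.fft.rfft(disp))
    freqs = np.fft.rfftfreq(len(disp), d=1.0 / fs)
    peaks, _ = find_peaks(spec, prominence=0.1 * spec.max())
    return freqs[peaks]          # e.g. ~3.37 Hz and ~10.89 Hz for the NVDS data
```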
To verify the accuracy of the dynamic characteristics obtained from the displacement data, the scaled model was modeled numerically with MIDAS-Gen and the natural frequencies were compared; the results are summarized in Table 2. The difference between the natural frequencies obtained from the numerical model and those obtained from the LDS and NVDS was less than 2%, whereas the VDS showed a maximum error of 7.58%. In other words, the NVDS extracts the first and second natural frequencies of the structure to a level of precision similar to that of the LDS, which the experiment showed to have a high degree of precision.
Figure 8. Comparison of natural frequency between LDS and NVDS and between LDS and VDS:
(a) 1st floor, (b) 2nd floor, (c) 3rd floor.
Table 2. Comparison of natural frequencies obtained from the analytical model and the white noise
test (errors relative to the analytical model in parentheses).

No. of Mode   Analytical Model (Hz)   LDS (Hz)         NVDS (Hz)        VDS (Hz)
1             3.31                    3.37 (1.81%)     3.37 (1.81%)     3.57 (7.58%)
2             11.08                   10.89 (1.71%)    10.89 (1.71%)    11.43 (3.16%)
3             17.57                   16.10 (8.37%)    -                -
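For example, the 1.81% first-mode error reported for the LDS and NVDS follows directly as |3.37 − 3.31| / 3.31 × 100% ≈ 1.81%.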
Figure 9 is a graph comparing the mode shapes obtained using the LDS, NVDS and VDS with those
analyzed using MIDAS. The first-order and second-order mode shapes obtained using the LDS and
NVDS were similar to those of MIDAS, but the second-order mode shape obtained using the VDS
was very different. In addition, modal assurance criterion (MAC) analysis was performed to
quantitatively analyze the associations between the mode shapes. The closer the MAC value is to 1,
the greater the similarity between the two mode vectors. As shown in Figure 10, the MAC analysis
results showed that the NVDS obtained the mode shapes most similar to those of MIDAS, with only
a slight difference from the LDS. In the case of the VDS, the MAC value for the first-order mode
shape was 0.975, close to 1; for the second-order mode shape, however, it was 0.738, noticeably
below 1. Therefore, it can be said that the second-order mode shape obtained using the VDS had a
low degree of reliability. In other words, the proposed NVDS is capable of precisely extracting the
displacement data of a structure even at nighttime and is suitable for use in the health monitoring
of a structure.
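For reference, the MAC between two mode-shape vectors φa and φb is MAC = |φaᵀφb|² / ((φaᵀφa)(φbᵀφb)). The following is a minimal Python sketch of this computation; the three-story mode-shape values are placeholders for illustration, not the measured data.

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal assurance criterion between two mode-shape vectors.
    Returns 1.0 for identical (up to scale) shapes, 0.0 for orthogonal ones."""
    num = np.dot(phi_a, phi_b) ** 2
    den = np.dot(phi_a, phi_a) * np.dot(phi_b, phi_b)
    return num / den

# Illustrative three-story first-mode shapes (placeholder values)
phi_midas = np.array([0.33, 0.67, 1.00])   # analytical mode shape
phi_nvds  = np.array([0.34, 0.66, 0.99])   # measured mode shape
print(round(mac(phi_midas, phi_nvds), 3))  # close to 1 -> strong similarity
```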
Figure 9. Mode shape comparisons between analytical model and experimental value: (a) 1st mode
shape; (b) 2nd mode shape.
Figure 10. Modal assurance criterion analysis results between analytical model and experimental
value: (a) MIDAS and LDS; (b) MIDAS and NVDS; (c) MIDAS and VDS.
5. Conclusions
In this study, a night vision displacement sensor (NVDS) system was proposed to measure the
displacement of a structure without any ancillary markers, even at night-time. In terms of the
hardware, the key technology of the NVDS is the night vision camera to which an infrared (IR) pass
filter was applied. In terms of the software, its key technology is the image convex hull optimization
algorithm, which extracts the feature points without using natural markers, such as bolt holes in a
structure, or artificial markers attached to the structure. The reliability of the displacement data
obtained using the NVDS was verified experimentally through an excitation test of a three-story
scaled model. The results of the study are summarized as follows.
1. A night vision camera, made by applying an IR pass filter to a commercial camera, can extract
information in the infrared region and is not greatly affected by visible rays. While the general
commercial camera could not capture the structure of the iris in the eye region, the night vision
camera could extract the information of the iris. Also, the noise of the image taken with the night
vision camera at night was small, whereas the noise of the image taken with the general camera
was large.
2. The image convex hull optimization algorithm, which extracts feature points without ancillary
markers, was formulated and verified by convergence analysis. The difference in the convex hull
area obtained from each image is 1.11 × 10⁻⁹ m² or less, which proves that the feature point can be
defined as the centroid of the convex hull in every image frame (a minimal sketch of this centroid
extraction is given after this list).
3. The results of the excitation test of the scaled model at night-time showed that the displacement
data were precisely measured from consecutive images without markers using the NVDS. The
difference in the RMS values between the displacement data obtained using the NVDS and those
obtained using the LDS (the reference data) was 0.24–2.51%. This is acceptably small; hence, it
experimentally proves that the dynamic displacement of a structure can be reliably obtained using
an NVDS even at night.
4. The dynamic characteristics of the structure were analyzed using the displacement data obtained
with the NVDS. The analysis results showed that the NVDS was able to precisely measure the
structure's natural frequencies. According to the MAC analysis, the mode shape obtained using the
NVDS was very similar to the experimental mode shape obtained using the LDS and the analytical
mode shape obtained from the numerical analysis. In other words, the proposed NVDS is capable
of precisely measuring not only the displacement data but also the dynamic behavior of a structure.
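As referenced in item 2, a minimal Python sketch of the centroid-of-convex-hull feature point follows. It assumes synthetic bright-pixel coordinates and the SciPy library; it illustrates only the underlying geometry, not the authors' full image convex hull optimization, which operates on binarized night-vision frames with a convergence criterion.

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_feature_point(pixel_coords):
    """Feature point as the centroid of the convex hull of bright-pixel
    coordinates (an Nx2 array). Returns (centroid_xy, hull_area)."""
    hull = ConvexHull(pixel_coords)
    poly = pixel_coords[hull.vertices]  # hull vertices in CCW order
    x, y = poly[:, 0], poly[:, 1]
    # shoelace formula for polygon area and centroid
    cross = x * np.roll(y, -1) - np.roll(x, -1) * y
    area = 0.5 * np.sum(cross)
    cx = np.sum((x + np.roll(x, -1)) * cross) / (6.0 * area)
    cy = np.sum((y + np.roll(y, -1)) * cross) / (6.0 * area)
    return np.array([cx, cy]), abs(area)

# Two consecutive (synthetic) frames: the same pixel blob shifted 1.5 px in x
rng = np.random.default_rng(0)
frame1 = rng.random((200, 2)) * 40
frame2 = frame1 + np.array([1.5, 0.0])
(c1, a1), (c2, a2) = hull_feature_point(frame1), hull_feature_point(frame2)
print(c2 - c1)       # ~[1.5, 0.0] -> pixel displacement of the feature point
print(abs(a2 - a1))  # ~0 -> hull area is stable from frame to frame
```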
The NVDS developed in this study was able to accurately measure the displacement of a desired
part of a structure through the night vision camera and the image convex hull optimization
technique, without ancillary markers, even at night-time. In addition, the dynamic characteristics of
a structure, such as the mode shape and the natural frequency, can be accurately measured using
the NVDS. Therefore, the proposed NVDS may be considered an alternative to existing sensors,
especially suitable for structural health monitoring under low visible-light conditions.
Author Contributions: I.C. and J.K. provided the main idea for this study; J.K. supervised the
planning and conduct of the experimental tests and the development of the article. I.C. designed the
experimental method and analyzed the results; I.C. and J.J. performed the experiments.
Acknowledgments: This research was supported by a grant (18CTAP-C129738-02) from the
Technology Advancement Research Program funded by the Ministry of Land, Infrastructure and
Transport of Korea, and by the Yonsei University Future-Leading Research Initiative of 2016
(2016-22-0063).
Conflicts of Interest: The authors declare no conflict of interest.