Kyukwang Kim’s research while affiliated with Korea Advanced Institute of Science and Technology and other places


Publications (15)


Figure 4. (a) Mosquitoes lured by the sensor nodes. (b) Boxplot showing the number of mosquitoes observed in the recorded video. Ten random screenshots from 10 videos (trials) were used.
A summary of the proposed network architecture based on the Fully Convolutional Network (FCN).
A Deep Learning-Based Automatic Mosquito Sensing and Control System for Urban Mosquito Habitats
  • Article
  • Full-text available

June 2019 · 1,026 Reads · 35 Citations

Kyukwang Kim · Hyeongkeun Kim · [...]
Mosquito control is important because mosquitoes are extremely harmful pests that spread various infectious diseases. In this research, we present the preliminary results of an automated system that detects the presence of mosquitoes via image processing using multiple deep learning networks. The Fully Convolutional Network (FCN) combined with neural-network-based regression demonstrated an accuracy of 84%, while the single-image classifier demonstrated an accuracy of only 52%. The overall processing time also decreased from 4.64 s to 2.47 s compared with the conventional classifying network. After detection, a larvicide made from toxic protein crystals of the Bacillus thuringiensis serotype israelensis bacterium was injected into static water to stop the proliferation of mosquitoes. This approach is more efficient than hunting adult mosquitoes and avoids damage to other insects.


Adaptive Planar Vision Marker Composed of LED Arrays for Sensing Under Low Visibility

January 2019 · 43 Reads · 2 Citations

Advances in Intelligent Systems and Computing

In image processing and robotic applications, two-dimensional (2D) black-and-white patterned planar markers are widely used. However, these markers are not detectable in low-visibility environments, and their patterns cannot be changed. This research proposes an active and adaptive marker node that displays 2D marker patterns using light-emitting diode (LED) arrays for easier recognition in foggy or turbid underwater environments. Because each node blinks at a different frequency, active LED marker nodes are distinguishable from one another at long distances without increasing the size of the marker. We expect that the proposed system can be used in various harsh conditions where conventional marker systems are not applicable because of low visibility. The proposed system remains compatible with conventional markers, as the displayed patterns are identical.
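The blink-frequency multiplexing described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the node IDs, the 2 Hz / 5 Hz blink rates, and the 30 fps frame rate are invented examples. A marker's brightness over time is Fourier-transformed, and the dominant peak is matched to the nearest registered blink frequency.

```python
import numpy as np

def identify_node(brightness, fps, node_freqs):
    """Match a marker's brightness time series to the node whose
    registered blink frequency is closest to the dominant FFT peak.
    `node_freqs` maps node IDs to blink rates in Hz."""
    # Remove the DC component so the blink fundamental dominates.
    spectrum = np.abs(np.fft.rfft(brightness - np.mean(brightness)))
    freqs = np.fft.rfftfreq(len(brightness), d=1.0 / fps)
    dominant = freqs[np.argmax(spectrum)]
    return min(node_freqs, key=lambda n: abs(node_freqs[n] - dominant))

fps = 30
t = np.arange(120) / fps                                    # 4 s of video
signal = 0.5 + 0.5 * np.sign(np.sin(2 * np.pi * 5.0 * t))   # 5 Hz blink
print(identify_node(signal, fps, {"node_a": 2.0, "node_b": 5.0}))  # → node_b
```

Because identification uses frequency rather than spatial detail, it degrades gracefully with distance and turbidity, which is the property the abstract highlights.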


Autoencoder-Combined Generative Adversarial Networks for Synthetic Image Data Generation and Detection of Jellyfish Swarm

September 2018 · 159 Reads · 37 Citations

IEEE Access

Image-based sensing of jellyfish is important because jellyfish can cause great damage to fisheries and seaside facilities and need to be properly controlled. In this research, we present a deep-learning-based technique to easily generate synthetic jellyfish images with autoencoder-combined generative adversarial networks. The proposed system can generate simple images from a smaller dataset than other generative networks require. The generated output showed high similarity to the real image dataset. An application using a fully convolutional network and a regression network to estimate the size of a jellyfish swarm was also demonstrated and showed high accuracy in the estimation test.


Figure 1. The processing flow of the MineLoC: (a) blueprint image with the different colors describes the function of the block; (b) block structure in the game world is generated based on the blueprint; (c) block structure is extracted using a freeware program; (d) 3D model is prepared for additive manufacturing; and (e) master template is printed using 3D printer. 
Table 1 . Summarized features of the other modeling software and the proposed method.
Table 2 . Accuracy comparison between other voxel-based software and the proposed method.
MineLoC: A Rapid Production of Lab-on-a-Chip Biosensors Using 3D Printer and the Sandbox Game, Minecraft

June 2018 · 261 Reads · 6 Citations

Here, MineLoC is described: a pipeline developed to generate 3D-printable models of master templates for Lab-on-a-Chip (LoC) devices by using the popular multiplayer sandbox game Minecraft. The user can draw a simple diagram describing the channels and chambers of the Lab-on-a-Chip device with pre-registered color codes that indicate the height of the generated structure. MineLoC converts the diagram into large chunks of blocks (equal-sized cube units composing every object in the game) in the game world. The user and co-workers can simultaneously access the game to edit, modify, or review the design, a feature not generally supported by conventional design software. Once the review is complete, the resultant structure can be exported as a stereolithography (STL) file for additive manufacturing, and the Lab-on-a-Chip device can then be fabricated by the standard protocol. The simple polydimethylsiloxane (PDMS) device for bacterial growth measurement used in previous research was reproduced by the proposed method; an error calculation by 3D model comparison showed an accuracy of 86%. It is anticipated that this work will facilitate wider use of 3D-printer-based Lab-on-a-Chip fabrication, which greatly lowers the entry barrier to Lab-on-a-Chip research.
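The color-code-to-height mapping at the heart of this pipeline can be sketched in a few lines. The palette and heights below are invented for illustration (the real tool reads its own pre-registered color codes), and block placement in the game world plus STL export are omitted.

```python
# Hypothetical palette: RGB color in the blueprint -> block-column height.
PALETTE = {
    (255, 255, 255): 0,  # white: empty floor
    (0, 0, 255): 1,      # blue: channel, one block high
    (255, 0, 0): 3,      # red: chamber, three blocks high
}

def blueprint_to_heights(pixels):
    """Map each RGB pixel of a blueprint image (a 2D grid of RGB tuples)
    to a block-column height; unknown colors fall back to height 0."""
    return [[PALETTE.get(tuple(px), 0) for px in row] for row in pixels]

blueprint = [
    [(255, 255, 255), (0, 0, 255), (0, 0, 255)],
    [(255, 0, 0), (0, 0, 255), (255, 255, 255)],
]
print(blueprint_to_heights(blueprint))  # → [[0, 1, 1], [3, 1, 0]]
```

Each entry of the resulting height map would then drive block placement at the corresponding game-world coordinate before export.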



Figure 4. Determination of growth inhibition concentration. Time series FFT regional analysis is performed in the antibiotic gradient generating device microscopy image for MIC determination. Growth is inhibited in the high-concentration region, while growth of bacteria is observable in the low-concentration region. 
Figure 5. Deep Neural Network (DNN)-based growth estimation. (a) Data flow and network structure of the Deep Neural Network-based growth estimation; (b) Results of the raw FFT measurement and DNN-based regression result. 
Specifications of the main layers of the deep neural networks used. The number next to the conv layers refers to the convolution filter and kernel size; the number next to the pool layers refers to the kernel and stride size; the number next to the fc layers refers to the inner product size.
Comparison between cell concentrations estimated from the number of colonies formed during the CFU measurement and the calculated FFT scores of different sample sources.
Visual Estimation of Bacterial Growth Level in Microfluidic Culture Systems

February 2018 · 312 Reads · 30 Citations

Microfluidic devices are an emerging platform for a variety of experiments involving bacterial cell culture and have advantages including low cost and convenience. One inevitable step during bacterial cell culture is the measurement of cell concentration in the channel. The optical density measurement technique is generally used for bacterial growth estimation, but it is not applicable to microfluidic devices because of their small sample volumes. Alternatively, cell counting or colony-forming unit methods may be applied, but these do not work in situ, nor do they show measurement results immediately. To this end, we present a new vision-based method to estimate the growth level of bacteria in microfluidic channels. We use the fast Fourier transform (FFT) to detect changes in the frequency content of the microscopic image, exploiting the fact that the image becomes rougher as the number of cells in the field of view increases, adding high frequencies to the spectrum. Two types of microfluidic devices are used to culture bacteria in liquid and agar gel media, and time-lapse images are captured. The images are analyzed using the FFT, revealing an increase in high-frequency content proportional to the elapsed time. Furthermore, we apply the developed method to a microfluidic antibiotic susceptibility test by recognizing the regional concentration change of bacteria cultured in an antibiotic gradient. Finally, a deep-learning-based data regression is performed on the data obtained by the proposed vision-based method for robust reporting of the results.
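A minimal sketch of the FFT-based idea, assuming a grayscale frame supplied as a NumPy array. The radial cutoff and the thresholding rule below are illustrative choices, not the paper's parameters: as cells accumulate, the image roughens and more spectral energy appears outside the low-frequency core, so counting strong high-frequency bins gives a growth proxy.

```python
import numpy as np

def fft_growth_score(image, cutoff=0.25):
    """Growth proxy for a grayscale microscope frame: count the
    high-frequency FFT bins whose log-magnitude exceeds the image's
    mean log-magnitude. `cutoff` is the fraction of the half-spectrum
    radius treated as 'low frequency' and excluded."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    magnitude = np.log1p(np.abs(spectrum))
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    high_freq = radius > cutoff * min(h, w) / 2
    return int(np.count_nonzero(magnitude[high_freq] > magnitude.mean()))

# A rough ("dense culture") frame should outscore a flat ("clear") one.
rng = np.random.default_rng(0)
clear = np.full((64, 64), 128.0)
dense = clear + rng.normal(0, 40, (64, 64))
assert fft_growth_score(dense) > fft_growth_score(clear)
```

Tracking this score over time-lapse frames would yield the monotonically increasing curve the abstract describes; the paper's deep-learning regression step is not shown here.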



Fig. 1. The principle of the ECF reactor
Fig. 11. Comparison of surface water (a) after application of the ECF system and (b) concentrated flocs of cyanobacteria
Development of Algal Bloom Removal System Using Unmanned Aerial Vehicle and Surface Vehicle

October 2017 · 691 Reads · 58 Citations

IEEE Access

Recently, owing to changes in weather conditions, cyanobacterial blooms, also known as harmful algal blooms (HABs), have caused serious damage to the ecosystems of rivers and lakes by producing cyanotoxins. In this paper, an algal bloom removal robotic system (ARROS) is proposed for the removal of HABs. ARROS is designed as a catamaran-type unmanned surface vehicle (USV) with an algae-removal device attached below. In addition, electrical control systems and a guidance, navigation, and control (GNC) system are implemented on ARROS to remove algal blooms autonomously. Moreover, to increase working efficiency, an unmanned aerial vehicle (UAV) is utilized, and the system detects algal blooms with an image-based detection algorithm known as the local binary pattern (LBP). The overall mission begins with a command from a server when the UAV detects an algal bloom; the USV then follows a path autonomously generated by a coverage path-planning algorithm. Subsequently, HABs are removed with an electrocoagulation and flotation (ECF) reactor under the USV. The performance of algal bloom detection and HAB removal was verified through outdoor field tests at Daecheong Dam, South Korea.
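The abstract names the local binary pattern (LBP) as the detection feature. Below is a minimal sketch of the basic 8-neighbour LBP operator only; the full detector (training data, classifier, thresholds) used in the paper is not reproduced here.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour Local Binary Pattern of a grayscale image
    (border pixels are skipped). Each interior pixel becomes an 8-bit
    code recording which neighbours are at least as bright as the
    centre; histograms of these codes describe local texture."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Clockwise neighbour offsets, one bit each.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = gray[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neighbour >= centre).astype(np.uint8) << bit
    return out

# A uniform patch yields code 255 everywhere (all comparisons true).
flat = np.full((4, 4), 7, dtype=np.uint8)
print(lbp_image(flat))
```

In a detector along the lines the abstract describes, the LBP code histogram of an aerial image patch would be compared against histograms of known bloom texture to flag candidate regions.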



Figure 1. Illustration explaining basic principles of the bacterial growth measurement methods (a) optical density calculation using light absorbance, (b) FFT spectrum measurement using an image of the striped pattern marker over the culture vessel, (c) the proposed method using an LED array as a marker. 
Figure 4. Growth measurement results comparison of the proposed LED array marker and printed striped pattern marker of the previous research in various visibility conditions. Red/green square box indicates the ROI (marked with the letter ROI) and the FFT pattern calculated from the ROI (marked with the number). The written number next to the spectrum indicates nonzero pixel size (FFT count). Results of before incubation (0 h) and after the incubation (4 h) were compared at conditions with (a) stripe pattern marker with the LB broth, (b) stripe pattern marker with the TSB broth, (c) LED marker with the LB broth, (d) LED marker with the TSB broth, and (e) LED marker with the TSB broth without the lightings. 
Figure 5. Plot showing the mean and standard error values of experiments done using bright broth (a) and dark broth (b). (n = 5) Data points having a value of 0 were converted to 0.5 during plotting for better visibility. The y axis indicates the number of nonzero pixels in the FFT spectrum images. 
Light Emitting Marker for Robust Vision-Based On-The-Spot Bacterial Growth Detection

June 2017 · 48 Reads · 3 Citations

Simple methods using a striped-pattern paper marker and the fast Fourier transform (FFT) have been proposed as alternatives to optical density measurement for determining the level of bacterial growth. The marker-based method can be easily automated, but because it relies on image processing, ambient light or the color of the culture broth can disturb the detection process. This paper proposes a modified version of marker-FFT-based growth detection that uses a light-emitting diode (LED) array as a marker. Since the marker itself emits light, measurements can be performed even when there is no light source or when the bacteria are cultured in a large volume of darkly colored broth. In addition, an LED marker can function as a region of interest (ROI) indicator in the image. We expect that the proposed LED-based marker system will allow more robust growth detection than conventional methods.


Citations (12)


... However, instrumentation and requirements of fluorescent labeling render this approach expensive and complex. Recently, Kim et al. [30] proposed a deep learning-based automatic mosquito sensing and control system which uses image processing powered by multiple deep learning networks to detect mosquito presence. Upon successful detection, the system automatically injects a larvicide, typically derived from toxic protein crystals of B. thuringiensis israelensis bacteria, into stagnant water bodies to prevent mosquito breeding. ...

Reference:

A 3D printed device for vibration-assisted separation of different-stage mosquito larvae
A Deep Learning-Based Automatic Mosquito Sensing and Control System for Urban Mosquito Habitats

... In particular, compared to the above approaches, generative adversarial networks (GANs) are able to learn more abundant data distributions and create additional, more realistic data in an unsupervised learning manner [25]. It adopts the adversarial learning strategy in the training process and has been successfully used to deal with data imbalance problems in various fields, such as image classification [26], pearl classification [27], and face recognition [28]. In addition, GANs have shown significant potential in the application of class imbalance in wind turbine fault diagnosis. ...

Autoencoder-Combined Generative Adversarial Networks for Synthetic Image Data Generation and Detection of Jellyfish Swarm

IEEE Access

... In engineering, Minecraft can supplement CAD models and physical miniatures [14] that students could complete prior to COVID-19. Educators can create realistic scenarios such as analyzing a defective machine [14], developing factory logistics, designing virtual vehicles, programming calculators or AI [24,29], and building functioning computers (RAM, CPU, ALU, GPU) [8], drawbridges, advanced architecture [1], flying machines [9] and even lab-on-chip sensors then produced by a 3D printer [15]. This is due to Minecraft's "redstone" mechanic, which acts as the in-game electric circuitry system [8,9]. ...

MineLoC: A Rapid Production of Lab-on-a-Chip Biosensors Using 3D Printer and the Sandbox Game, Minecraft

... Therefore, our proposed double-sided emission micro-LED array could play an important role in 6G network by integrating air/underwater OWC networks because it can transmit light in both directions at the air-water interface to avoid medium crossing. In addition, the parallel configuration of the green micro-LED array can enhance the light output power (LOP) and achieve longer transmission distance, allows some pixels to continue to work while others fail, thus maintaining uninterrupted communication, and has the potential of underwater display or as a point indicator for underwater robot docking in an underwater environment with low visibility [13]. ...

Adaptive Planar Vision Marker Composed of LED Arrays for Sensing Under Low Visibility
  • Citing Chapter
  • January 2019

Advances in Intelligent Systems and Computing

... The potential progress of antibiotic susceptibility tests by using a micro-loading chip has been described 118,119,123 , which enables the automatic analysis of morphological changes in single bacteria under various antimicrobial conditions 129 . Recently, microfluidic chips have been used to generate a gradient of antibiotic concentration to determine the minimal inhibitory concentration 130,131 . The combination of highresolution imaging modalities with the fine fluidic controller will facilitate the development of an automated highthroughput monitoring system. ...

Visual Estimation of Bacterial Growth Level in Microfluidic Culture Systems

... For this, a Can-Satellite has been used that weighs less than 700 grams and is equipped with an RGB camera and a front-facing infrared camera; it also carries additional flight sensors and network interfaces, which transmit the information and coordinate recognition of the detected survivor. It likewise has a deep neural network architecture whose function is to classify human figures in the transmitted images (Kim, Hyun, & Myung, 2017). One can thus observe the growing use of unmanned aerial vehicles not only to reduce costs but also to broaden the scope of action as the situation warrants; more appropriate responses to the problem can be taken, since the images are transmitted in real time. ...

Development of aerial image transmitting sensor platform for disaster site surveillance
  • Citing Conference Paper
  • October 2017

... Other UAV platforms included Aytges (2%) (Cillero , FireFLY BirdsEyeView (2%) (Choo et al., 2018), Begren RC (2%) (Becker et al., 2019), G4 SkyCrane (2%) (Pokrzywinski et al., 2022), ATI AgBot (2%) (Arango & Nairn, 2019), 3DR Solo(2%) (Morgan et al., 2020) and Remo-M (2%) . Additionally, 23% of the reviewed studies combined UAV and satellite acquired data from sensors which include Sentinel-2 & 3, Landsat 7, 8 & 9, PlanetScope, GF-1 (Gaofen-1), Orbita Hyperspectral Satellite (OHS) and ZY-3 satellite (Jung et al., 2017;Cillero Castro et al., 2020;El-Alem et al., 2021;Fu et al., 2023;Yang et al., 2023). This synergistic approach enhances spatial and temporal coverage, improves data accuracy, and allows for continuous monitoring, event detection and more comprehensive analysis. ...

Development of Algal Bloom Removal System Using Unmanned Aerial Vehicle and Surface Vehicle

IEEE Access

... Especially in the case of the structure, 24-h monitoring is essential but because the input data of the VDS is the wavelength of the visible light region, the VDS has a disadvantage: the reliability of the data is drastically lowered at night, when the visible ray is lacking. Research using LED has been conducted to overcome this disadvantage [34] but it poses problems such as reduction of the accuracy of the displacement data due to the LED light leaks, in addition to the aforementioned problem caused by the marker. ...

Light Emitting Marker for Robust Vision-Based On-The-Spot Bacterial Growth Detection

... Moreover, the results were compared with previously reported data using a conventionally fabricated chip of the same chip design. Kim et al. [21,22], in a previous report, proposed a vision-based method to measure bacterial growth. The result showed that the area of the non-overlapping region (error) was 2965 pixels while the area of the total region was 21,100 pixels. An accuracy of 86% was achieved by the proposed method. ...

Vision Marker-Based In Situ Examination of Bacterial Growth in Liquid Culture Media

... However, published results suggest that researchers achieve an enhanced navigation precision with the fusion of data from inertial sensors, visual motion detection data, and ranging sensors. 4,5 In this article, we propose a modified extended Kalman filter (M-EKF) that allows fusing the most accurate data from sensors such as optical flow sensor (camera), accelerometer, gyroscope, and a barometric altitude sensor to estimate UAV position in GPS-denied areas. ...

Calibration of the drift error in GPS using optical flow and fixed reference station
  • Citing Conference Paper
  • October 2015