Natthaphop Phatthamolrat’s research while affiliated with King Mongkut's Institute of Technology Ladkrabang and other places

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (3)


Illustration of the use of an industrial robot in an aircraft refueling system.
Equipment of the refueling industrial robot system; (a) fueling nozzle with gripper installed; (b) simulated underwing loading adapter installed above the robot.
The DLBVS system uses visual feedback from the first camera to pick up the nozzle precisely. It defines desired pose estimates (ξ*_rough and ξ*_fine) and continuously compares them with the current pose estimates (ξ_rough(t) and ξ_fine(t)) to compute the errors (e_rough(t) and e_fine(t)).
The DLBVS system uses visual feedback from the second camera to align and connect the nozzle to the adapter precisely. It follows the same scheme: desired pose estimates (ξ*_rough and ξ*_fine) are continuously compared with the current estimates (ξ_rough(t) and ξ_fine(t)) to compute the errors (e_rough(t) and e_fine(t)).
A visual representation of the labeled nozzle image achieved using the two-step learning method: Figure 5a shows a single feature identified as 'A', while Figure 5b shows features labeled 'A0', 'A1', 'A2', 'A3', 'B0', 'B1', 'B2' and 'B3'.
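The rough-then-fine error loop described in these captions amounts to repeatedly comparing a desired pose ξ* with the current estimate ξ(t) and acting on the difference. A minimal numeric sketch of that idea, with a simple proportional update; the poses, gain and tolerance are illustrative assumptions, not the paper's controller:

```python
import numpy as np

def pose_error(desired, current):
    """Element-wise pose error e(t) = xi* - xi(t) for a 6-DOF pose
    given as (x, y, z, roll, pitch, yaw)."""
    return np.asarray(desired) - np.asarray(current)

def servo_step(xi_star, xi_t, gain=0.5, tol=1e-3):
    """One proportional visual-servo step: move a fraction of the
    remaining error toward the target; report convergence."""
    e_t = pose_error(xi_star, xi_t)
    converged = np.linalg.norm(e_t) < tol
    return xi_t + gain * e_t, converged

# Hypothetical desired pose for the rough stage (meters / radians).
xi_rough_star = np.array([0.40, 0.10, 0.25, 0.0, 0.0, 0.0])

# Simulated loop: the pose estimate xi(t) is driven toward xi*_rough.
xi_t = np.array([0.30, 0.05, 0.30, 0.05, 0.0, 0.0])
done = False
for _ in range(30):
    xi_t, done = servo_step(xi_rough_star, xi_t)
    if done:
        break
print(np.round(xi_t, 3))
```

In the paper a fine stage with its own ξ*_fine would follow the rough stage; here a single stage suffices to show the feedback structure.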


Deep Learning Based Visual Servo for Autonomous Aircraft Refueling
  • Article
  • Full-text available

March 2025 · 20 Reads

Natthaphop Phatthamolrat

This study develops and evaluates a deep learning-based visual servoing (DLBVS) control system for guiding industrial robots during aircraft refueling, aiming to enhance operational efficiency and precision. The system employs a monocular camera mounted on the robot's end effector to capture images of the target objects, the refueling nozzle and the bottom loading adapter, eliminating the need for prior calibration and simplifying real-world implementation. Using deep learning, the system identifies feature points on these objects to estimate their pose, providing the data needed for precise manipulation. The proposed method integrates two-stage neural networks with the Efficient Perspective-n-Point (EPnP) principle to determine orientation and rotation angles, while an approximation based on feature-point errors computes the linear positions. The DLBVS system reliably commands the robot arm to approach and interact with the targets, even under positional deviations. Quantitative results show translational errors below 0.5 mm and rotational errors under 1.5° for both the nozzle and the adapter, demonstrating the system's capability for intricate refueling operations. This work contributes a practical, calibration-free solution for enhancing automation in aerospace applications. The videos and data sets from the research are publicly accessible at https://tinyurl.com/CiRAxDLBVS.
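The EPnP step in the abstract yields a rotation from which the rotation angles must be extracted. A small self-contained sketch of that conversion for a ZYX (yaw-pitch-roll) convention; the convention and the test angles are illustrative assumptions, not taken from the paper (EPnP itself, e.g. OpenCV's solvePnP with the SOLVEPNP_EPNP flag, would supply the rotation):

```python
import numpy as np

def euler_to_matrix(roll, pitch, yaw):
    """Build a ZYX rotation matrix R = Rz(yaw) @ Ry(pitch) @ Rx(roll),
    angles in radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def matrix_to_euler(R):
    """Recover (roll, pitch, yaw) from a ZYX rotation matrix,
    away from the pitch = +/-90 degree singularity."""
    pitch = -np.arcsin(R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return roll, pitch, yaw

# Round trip: angles -> matrix -> angles.
roll, pitch, yaw = matrix_to_euler(euler_to_matrix(0.1, -0.2, 0.3))
```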


Comparison of data augmentation conditions for YOLO v3 network model.
Performance of deep convolutional neural network approaches and human level in detecting mosquito species

July 2021 · 189 Reads · 2 Citations

Mosquito-borne diseases, including dengue, Zika and malaria, have recently become a significant public-health problem worldwide. The need to reduce disease spread has stimulated researchers to develop automatic methods beyond traditional surveillance. The well-known deep convolutional neural network YOLO v3 was applied to classify mosquito vector species and achieved a high average accuracy of 97.7 per cent. While the one-stage learning method produced impressive output for Aedes albopictus, Anopheles sinensis and Culex pipiens, image-annotation functions may help boost model capability on images with low sensitivity (< 60 per cent) for Cu. tritaeniorhynchus and low precision (< 80 per cent) for Ae. vexans. The optimal data-augmentation conditions (rotation, contrast, blurring and Gaussian noise) were investigated within the limited number of biological samples to increase the efficiency of the selected model. The augmented model achieved 96.6 per cent sensitivity, 99.6 per cent specificity, 99.1 per cent accuracy and 98.1 per cent precision, and the area under the ROC curve (AUC) of 0.985 confirmed the model's ability to differentiate between groups. Inter- and intra-rater agreement between the ground truth (entomological labeling) and the best model was studied and compared with assessments by independent entomologists. Near-perfect agreement between the ground-truth labels and the proposed model (k = 0.950 ± 0.035) was observed in both examinations, while a high degree of agreement was found for entomologists with 5-10 years of experience (k = 0.875 ± 0.053 and 0.900 ± 0.048). The proposed YOLO v3 network therefore has strong potential as a support tool for entomological technicians during local-area detection.
In the future, introducing appropriate network-model-based methods to extract qualitative and quantitative information will help local workers work more quickly. It may also assist in preparing strategies to deter the transmission of arthropod-transmitted diseases.
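The metrics the abstract reports (sensitivity, specificity, accuracy, precision and Cohen's kappa) all follow mechanically from the classification counts. A self-contained sketch with made-up counts and labels, not the study's data:

```python
from collections import Counter

def binary_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, accuracy and precision (as percentages)
    from the four cells of a 2x2 confusion matrix."""
    sens = 100 * tp / (tp + fn)
    spec = 100 * tn / (tn + fp)
    acc = 100 * (tp + tn) / (tp + fp + fn + tn)
    prec = 100 * tp / (tp + fp)
    return sens, spec, acc, prec

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from the two raters' label frequencies."""
    n = len(y_true)
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n
    ct, cp = Counter(y_true), Counter(y_pred)
    pe = sum(ct[label] * cp[label] for label in set(ct) | set(cp)) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical counts: 90 true positives, 2 false positives,
# 10 false negatives, 98 true negatives.
sens, spec, acc, prec = binary_metrics(tp=90, fp=2, fn=10, tn=98)

# Hypothetical rater labels for the kappa example.
k = cohens_kappa([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
```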


Study sites used for the sample collection. The map was originally obtained from https://upload.wikimedia.org/wikipedia/commons/a/ab/Thailand_Bangkok_locator_map.svg with license https://creativecommons.org/licenses/by/3.0/deed.en and was modified by using free version of Adobe Illustrator CC 2017 software.
Sample categories. Wild-caught mosquitoes include Aeg_F (Ae. aegypti female), Aeg_M (Ae. aegypti male), Alb_F (Ae. albopictus female), Ars_F (Armigeres subalbatus female), And_F (An. dirus female), And_M (An. dirus male), Cuv_F (Cu. vishnui female), Cuq_F (Cu. quinquefasciatus female), Cuq_M (Cu. quinquefasciatus male), Cug_F (Cu. gelidus female), Maa_F (Mansonia annularis female), Mau_F (Ma. uniformis female) and Mai_F (Ma. indiana female). All photographs were taken by Veerayuth Kittichai, the first author in the manuscript, at Bangkok area, Thailand.
Workflow for data handling in the end-to-end neural network model, which consisted of two learning strategies, namely the one-stage and two-stage learning methods. (1) The one-stage learning method progressed along the dark-blue dashed line, starting from the ground-truth labelling for the genus, species and relative gender of the insect. The ground-truth labels, indicated in red rectangles, were trained in the [model] architecture. If the trained weight reached the optimal value, the output, pertaining to the correct relative genus, species and gender, was displayed in the output box (red rectangle). Under the CiRA CORE platform, the red rectangular output box could be set to display or hide the value. (2) The two-stage learning method progressed along the light-blue dashed line. The start point corresponded to the ground-truth labelling for mosquitoes and non-mosquitoes before training with the [model_1] architecture, indicated in the red rectangle. The optimal trained weight was validated by checking whether it could correctly distinguish between the non-mosquito and mosquito testing images. The images in the set were then cropped using one of the functions of the CiRA CORE programme, to be used as the dataset for the second learning process, implemented using [model_2] after each cropped image was labelled with the relative genus, species and gender, as indicated in the yellow rectangle. The output could be displayed in two rectangular boxes (red and yellow); the first box corresponded to the mosquito detection, and the second to the classification of the relative genus, species and gender of the mosquito. Under the CiRA CORE platform, both the yellow and red rectangular output boxes could be set to display or hide the values.
ROC curve and average AUC for each model and the threshold probability of the learning method with the YOLO network models. (a,b) Correspond to the one-stage and two-stage methods of tiny YOLO v2, respectively. (c,d) Correspond to the one-stage and two-stage methods of YOLO v2, respectively. (e,f) Correspond to the one-stage and two-stage methods of YOLO v3, respectively.
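The two-stage workflow in the caption above (detect mosquitoes in the full frame, crop each detection, classify the crop) can be sketched as a plain pipeline. The detector and classifier below are stubs standing in for the two trained networks; the function names and return values are hypothetical, not the CiRA CORE API:

```python
def detect_mosquitoes(image):
    """Stage 1 stub: return bounding boxes (x, y, w, h) of mosquitoes."""
    return [(10, 20, 64, 48)]

def classify_species(crop_img):
    """Stage 2 stub: return a class label and confidence for one crop."""
    return ("Aeg_F", 0.97)

def crop(image, box):
    """Cut the boxed region out of a row-major nested-list image."""
    x, y, w, h = box
    return [row[x:x + w] for row in image[y:y + h]]

def two_stage_pipeline(image):
    """Detect, crop, then classify each detection."""
    results = []
    for box in detect_mosquitoes(image):
        label, conf = classify_species(crop(image, box))
        results.append({"box": box, "label": label, "confidence": conf})
    return results

# A dummy 100x100 "image" (nested lists) exercises the control flow.
frame = [[0] * 100 for _ in range(100)]
detections = two_stage_pipeline(frame)
```

The point of the split is that stage 2 sees only tightly cropped mosquitoes, which is why the caption's second network can specialize in genus/species/gender rather than localization.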
Deep learning approaches for challenging species and gender identification of mosquito vectors

March 2021 · 1,457 Reads · 72 Citations

Microscopic observation of mosquito species, the basis of morphological identification, is a time-consuming and challenging process, particularly owing to the varying skills and experience of public-health personnel. We present deep learning models based on the well-known you-only-look-once (YOLO) algorithm, which can simultaneously localize and classify images to identify the species and gender of field-caught mosquitoes. The results indicated that the concatenation of two YOLO v3 models exhibited the optimal performance in identifying the mosquitoes, as the mosquitoes were relatively small objects compared with the proportionally large environment in each image. Robustness testing of the proposed model yielded a mean average precision and sensitivity of 99% and 92.4%, respectively. The model exhibited high specificity and accuracy, with an extremely low rate of misclassification. The area under the receiver operating characteristic curve (AUC) was 0.958 ± 0.011, which further demonstrated the model accuracy. Thirteen classes were detected with an accuracy of 100% based on a confusion matrix. Nevertheless, the relatively low detection rates for two species were likely a result of the limited number of wild-caught biological samples available. The proposed model can help establish the population densities of mosquito vectors in remote areas to predict disease outbreaks in advance.

Citations (2)


... There has also been an increasing use of ANN or DL architectures for solving entomological problems (Høye et al., 2021;Peng and Wang, 2022;Tuda and Luna-Maldonado, 2020). For example, CNNs have been applied to detect defects in butterfly pupae (Montellano, 2019), to discriminate the sex of Silkworm pupae (Tao et al., 2019), to distinguish mosquito vector species (Jomtarak et al., 2021;Joshi and Miller, 2021), to count fruit flies on field traps (She et al., 2022), to monitor edible insect rearing (Majewski et al., 2022), and to predict and recognize pest incidence in stored commodities and field crops (Barboza da Silva et al., 2021;Cheng et al., 2017;De Cesaro Júnior et al., 2022;Grünig et al., 2021;Thenmozhi and Reddy, 2019). In view of the potential offered by the combination of radiography and CNNs to classify mass-reared parasitized pupae of fruit flies, the purpose of this study was (1) to verify if it is possible to discriminate parasitized from unparasitized pupae through X-ray images, and (2) to test the suitability of 7 CNN-based neural architectures to classify the fruit fly pupae parasitized by the wasp D. longicaudata. ...

Reference:

Automatic classification of parasitized fruit fly pupae from X-ray images by convolutional neural networks
Performance of deep convolutional neural network approaches and human level in detecting mosquito species

... Despite their efficiency, the performance of machine learning models can be influenced by dataset quality, feature variety, and environmental unpredictability (Kittichai et al., 2021). Machine learning, especially deep learning models such as CNNs and SVMs, is essential for gender recognition through the analysis of biometric patterns and structural characteristics for automated classification. ...

Deep learning approaches for challenging species and gender identification of mosquito vectors