Wensheng Wang’s research while affiliated with National Bank for Agriculture and Rural Development and other places
This study examined the socioeconomic factors that influence the adoption of information and communication technologies (ICTs) in enhancing farm productivity among farmers in Ba Province, Fiji. A structured questionnaire survey was administered to a sample of 320 randomly selected farmers across the province’s 16 mainland districts. The analysis demonstrated that, although farmers possessed conventional ICTs, there was no direct correlation between ownership and actual utilisation. Significant determinants affecting ICT use were identified as education, experience, type of farming, and business model. These findings underscore critical implications for both policy and theoretical frameworks, emphasising essential factors to consider in the implementation of ICT solutions for agricultural practitioners.
SLAM (Simultaneous Localization and Mapping) is an efficient way for robots to perceive their surroundings and make decisions, especially in agricultural scenarios, where automated perception and path planning are crucial for precision agriculture. However, few public datasets are available for implementing and developing robotic algorithms in agricultural environments. We therefore collected the "GrapeSLAM" dataset, which comprises video data from vineyards to support agricultural robotics research. Data collection involved two primary methods: (1) an unmanned aerial vehicle (UAV) capturing videos under different illumination conditions, and (2) the UAV's trajectory during each flight, recorded by RTK and IMU. The UAV, a Phantom 4 RTK equipped with a high-resolution camera, flew at around 1 to 3 meters above ground level.
This study explores body weight estimation for cattle in both standing and lying postures using 3D data. We apply an unmanned aerial vehicle (UAV) based LiDAR system to collect data during routine resting periods between feedings under the natural husbandry conditions of a commercial farm, ensuring minimal interruption to the animals. Ground truth data are obtained by weighing cattle as they voluntarily pass an environmentally embedded scale. We developed separate models for standing and lying postures and trained them on features extracted from the segmented point clouds of cattle with unique identifiers (UIDs). The models for the standing posture achieve high accuracy, with the best-performing model, Random Forest, obtaining an R² of 0.94, an MAE of 4.72 kg, and an RMSE of 6.33 kg. Multiple linear regression models are trained to estimate body weight for the lying posture, using volume- and posture-wise characteristics. The model uses 1 cm slices for the slice-wise volume calculation, achieving an R² of 0.71, an MAE of 7.71 kg, and an RMSE of 9.56 kg. These results highlight the potential of UAV-based LiDAR data for accurate and non-intrusive estimation of cattle body weight in lying and standing postures, paving the way for improved management practices in precision livestock farming.
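The slice-wise volume idea above can be sketched geometrically: cut the point cloud into thin horizontal slices, approximate each slice's footprint with a 2D convex hull, and stack slice area times thickness. This is a minimal illustration only; the paper's segmentation, features, and regression models are not reproduced, and the box point cloud is synthetic.

```python
import numpy as np
from scipy.spatial import ConvexHull

def slice_volume(points, thickness=0.01):
    """Approximate a point cloud's volume by stacking the convex-hull
    areas of horizontal slices of the given thickness (in metres)."""
    z = points[:, 2]
    volume = 0.0
    for lo in np.arange(z.min(), z.max(), thickness):
        sl = points[(z >= lo) & (z < lo + thickness)]
        if len(sl) >= 3:
            # for a 2D hull, ConvexHull.volume is the polygon area
            volume += ConvexHull(sl[:, :2]).volume * thickness
    return volume

# sanity check on a synthetic 1 m x 1 m x 0.1 m box sampled on a grid
g = np.linspace(0.0, 1.0, 11)
zs = np.linspace(0.0, 0.1, 11)
box = np.array([[x, y, z] for x in g for y in g for z in zs])
print(round(slice_volume(box), 3))  # close to the true volume of 0.1 m^3
```

The convex-hull-per-slice step is one simple choice; concave animal cross-sections would need an alpha-shape or voxel approach instead.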
The effectiveness of supervised ML heavily depends on having a large, accurate, and diverse annotated dataset, which poses a challenge in applying ML for yield prediction. To address this issue, we developed a self-training random forest algorithm capable of automatically expanding the annotated dataset. Specifically, we trained a random forest regressor model using a small amount of annotated data. This model was then utilized to generate new annotations, thereby automatically extending the training dataset through self-training. Our experiments involved collecting data from over 30 winter wheat varieties during the 2019–2020 and 2021–2022 growing seasons. The testing results indicated that our model achieved an R² of 0.84, RMSE of 627.94 kg/ha, and MAE of 516.94 kg/ha in the test dataset, while the validation dataset yielded an R² of 0.81, RMSE of 692.96 kg/ha, and MAE of 550.62 kg/ha. In comparison, the standard random forest resulted in an R² of 0.81, RMSE of 681.02 kg/ha, and MAE of 568.97 kg/ha in the test dataset, with validation results of an R² of 0.79, RMSE of 736.24 kg/ha, and MAE of 585.85 kg/ha. Overall, these results demonstrate that our self-training random forest algorithm is a practical and effective solution for expanding annotated datasets, thereby enhancing the prediction accuracy of ML models in winter wheat yield forecasting.
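The self-training loop described above can be sketched as follows: fit a random forest on the small labelled set, pseudo-label the unlabelled samples it is most confident about, and refit on the union. The confidence criterion here (spread of predictions across trees) and all names and parameters are assumptions for illustration, not the paper's exact selection rule.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def self_training_rf(X_lab, y_lab, X_unlab, n_rounds=3, frac=0.5):
    """Self-training: fit on labelled data, pseudo-label the unlabelled
    samples the forest agrees most on, and refit on the enlarged set."""
    X_tr, y_tr = np.asarray(X_lab), np.asarray(y_lab)
    pool = np.asarray(X_unlab)
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    for _ in range(n_rounds):
        model.fit(X_tr, y_tr)
        if len(pool) == 0:
            break
        # per-sample spread across trees as an (assumed) confidence proxy
        tree_preds = np.stack([t.predict(pool) for t in model.estimators_])
        order = np.argsort(tree_preds.std(axis=0))
        keep = order[: max(1, int(frac * len(pool)))]  # most confident samples
        X_tr = np.vstack([X_tr, pool[keep]])
        y_tr = np.concatenate([y_tr, tree_preds.mean(axis=0)[keep]])
        pool = np.delete(pool, keep, axis=0)
    return model

# toy usage: 40 labelled and 160 unlabelled samples of y = 3x + noise
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (200, 1))
y = 3.0 * X.ravel() + rng.normal(0.0, 0.05, 200)
model = self_training_rf(X[:40], y[:40], X[40:])
print(model.predict([[0.5]]))
```

Pseudo-labels inherit the current model's bias, so in practice the fraction added per round and a hold-out validation set (as in the abstract) matter for keeping the expansion honest.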
UAVs equipped with various sensors offer a promising approach for enhancing orchard management efficiency. Up-close sensing enables precise crop localization and mapping, providing valuable a priori information for informed decision-making. Current research on localization and mapping methods can be broadly classified into Structure-from-Motion (SfM), traditional feature-based SLAM, and deep learning-integrated SLAM. While previous studies have evaluated these methods on public datasets, real-world agricultural environments, particularly vineyards, present unique challenges due to their complexity, dynamism, and unstructured nature.
To bridge this gap, we conducted a comprehensive study in vineyards, collecting data under diverse conditions (flight modes, illumination conditions, and shooting angles) using a UAV equipped with a high-resolution camera. To assess the performance of different methods, we proposed five evaluation metrics: efficiency, point cloud completeness, localization accuracy, parameter sensitivity, and plant-level spatial accuracy. We compared two SLAM approaches against SfM as a benchmark.
Our findings reveal that deep learning-based SLAM outperforms SfM and feature-based SLAM in terms of position accuracy and point cloud resolution. Deep learning-based SLAM reduced average position error by 87% and increased point cloud resolution by 571%. However, feature-based SLAM demonstrated superior efficiency, making it a more suitable choice for real-time applications. These results offer valuable insights for selecting appropriate methods, considering illumination conditions, and optimizing parameters to balance accuracy and computational efficiency in orchard management activities.
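The position-error comparison above presumes a trajectory-error metric; a common one is the absolute trajectory error (ATE). The sketch below computes an RMSE-style ATE after only removing each trajectory's centroid, which is an assumed simplification: full SLAM benchmarks normally add a rigid (Umeyama) alignment and timestamp association first.

```python
import numpy as np

def ate_rmse(est, gt):
    """RMSE of position error between an estimated and a ground-truth
    trajectory (N x 3 arrays), after removing each centroid."""
    est = est - est.mean(axis=0)
    gt = gt - gt.mean(axis=0)
    return float(np.sqrt(np.mean(np.sum((est - gt) ** 2, axis=1))))

# toy check: a straight-line ground truth vs. a purely offset estimate
t = np.linspace(0.0, 10.0, 101)
gt = np.column_stack([t, 0.5 * t, np.full_like(t, 2.0)])
est = gt + np.array([1.0, -2.0, 0.5])   # constant offset only
print(round(ate_rmse(est, gt), 6))      # → 0.0 (offset removed by centering)
```

A constant GPS-style offset vanishes under centering, which is why reported SLAM errors reflect drift and shape distortion rather than absolute datum shifts.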
Effective segmentation of individual animals is crucial for monitoring cattle growth in precision livestock farming (PLF) when employing computer vision (CV) on 2D images or videos. However, similar research in 3D scenes remains largely unexplored. This study applied three-dimensional (3D) CV approaches in conjunction with a UAV-based LiDAR system, enabling automated 3D individual segmentation. During fieldwork, the UAV-based LiDAR system scanned an outdoor area where a herd of 96 male Simmental beef cattle rested after feeding, executing 36 flight plans. These plans yielded 36 campaign point clouds, each obtained under different flight conditions spanning a range of heights (8 m, 10 m, 15 m, 30 m, 50 m) and speeds (1 m/s, 2 m/s, 3 m/s, 5 m/s, 7 m/s, 9 m/s). Of these, 30 were conducted in daytime light conditions and 6 at night. To segment the point clouds for individual animal detection, a height-difference threshold is first employed to distinguish between lying and standing cattle. The DBSCAN clustering algorithm is then applied to the above-threshold portion to separate standing animals, followed by noise removal to ensure complete cattle body point clouds. This procedure yielded 276 individual cattle point clouds from 400 standing animals, including 47 from nighttime scans. The study confirms the effectiveness of 3D individual segmentation by automatically segmenting standing cattle under natural husbandry conditions from UAV-based LiDAR point clouds. The method facilitates 3D cattle growth monitoring research such as body measurement assessment and body weight estimation in PLF.
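The threshold-then-cluster step described above can be sketched in a few lines: keep points above a height threshold, then let DBSCAN group them into one cluster per standing animal. The threshold, `eps`, and `min_samples` values here are illustrative assumptions, and the scene is synthetic; the paper's noise-removal stage is omitted.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def segment_standing(points, height_thresh=0.8, eps=0.3, min_samples=20):
    """Keep points above a height threshold (standing animals), then let
    DBSCAN split them into one cluster per animal; label -1 is noise."""
    above = points[points[:, 2] > height_thresh]
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(above)
    return [above[labels == k] for k in sorted(set(labels)) if k != -1]

# synthetic scene: flat ground plus two well-separated blobs ~1.3 m tall
rng = np.random.default_rng(0)
ground = np.column_stack([rng.uniform(0, 10, (500, 2)),
                          rng.uniform(0, 0.05, 500)])
cow_a = rng.normal([2.0, 2.0, 1.3], 0.1, (200, 3))
cow_b = rng.normal([7.0, 7.0, 1.3], 0.1, (200, 3))
animals = segment_standing(np.vstack([ground, cow_a, cow_b]))
print(len(animals))  # → 2
```

DBSCAN needs no preset cluster count, which suits herds of unknown size; touching animals, however, merge into one cluster unless `eps` is tuned or a split step follows.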
Accurate body measurements are crucial for effective management of cattle growth in precision livestock farming. This study introduces a novel non-contact approach that leverages point clouds acquired by an unmanned aerial vehicle (UAV) based LiDAR to obtain body measurements of cattle within their natural husbandry conditions. The experiment encompasses 36 LiDAR scanning campaigns, six of them at night, using various combinations of flight speed and height. An automated procedure for retrieving body measurements is applied to pre-processed cattle point clouds, with a total of 276 individual animals segmented from the campaign-generated point clouds using an automated segmentation procedure. The procedure uses identified body marks to extract six body measurements from each cattle point cloud. To improve the accuracy of the extracted body measurement dataset, multivariate analysis of variance (MANOVA) is used to adjust the dataset by excluding data acquired at flight heights of 30 m and 50 m or flight speeds of 7 m/s and 9 m/s. The reference dataset validates the effectiveness of this adjustment, demonstrating substantial reductions in mean absolute error (MAE); for example, the vertical gap measurement (h1) on reference objects improves from 2 cm to 2 mm. Furthermore, the study investigates anatomical hip height (HH') estimation by developing a 10-fold cross-validation linear regression model based on a training dataset of 136 pairs of waist height and hip height (HH) derived with a manually added auxiliary plane. The model estimates HH' with an R² of 0.84, an MAE of 0.012 m, and an RMSE of 0.015 m. Moreover, this study proposes a dual rotation algorithm to normalise cattle head orientation. The results contribute to the advancement of UAV-based LiDAR for cattle growth management.
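The 10-fold cross-validated linear regression from waist height to hip height can be sketched with scikit-learn. The synthetic pairs below are stand-ins only: the linear coefficients, noise level, and value ranges are assumptions, not the paper's 136 measured pairs.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

# illustrative stand-in for the 136 waist-height / hip-height pairs
rng = np.random.default_rng(0)
waist = rng.uniform(1.20, 1.45, 136)                     # waist height, m
hip = 0.95 * waist + 0.05 + rng.normal(0.0, 0.01, 136)   # assumed relation

# 10-fold cross-validated R² of a simple linear model, as in the abstract
cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(LinearRegression(), waist.reshape(-1, 1), hip,
                         cv=cv, scoring="r2")
print(round(scores.mean(), 3))
```

Shuffled folds matter here: with only 136 animals, unshuffled folds could align with acquisition order and bias the cross-validation estimate.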
Preharvest crop yield estimation is crucial for achieving food security and managing crop growth. Unmanned aerial vehicles (UAVs) can quickly and accurately acquire field crop growth data and are an important medium for collecting agricultural remote sensing data. With the rapid development of machine learning, especially deep learning, research on yield estimation based on UAV remote sensing data and machine learning has achieved excellent results. This paper systematically reviews current research on yield estimation based on UAV remote sensing and machine learning through a search of 76 articles, covering aspects such as the grain crops studied, research questions, data collection, feature selection, optimal yield estimation models, and optimal growth periods for yield estimation. Through visual and narrative analysis, the conclusions address all the proposed research questions. Wheat, corn, rice, and soybeans are the main research objects, and the mechanisms of nitrogen fertilizer application, irrigation, crop variety diversity, and gene diversity have received widespread attention. In the modeling process, feature selection is key to improving the robustness and accuracy of the model. Whether yield estimation is based on single-modal or multimodal features, multispectral images are the main source of feature information. The optimal yield estimation model may vary depending on the selected features and the period of data collection, but random forests and convolutional neural networks still perform best in most cases. Finally, this study examines the challenges currently faced in terms of data volume, feature selection and optimization, determining the optimal growth period, algorithm selection and application, and the limitations of UAVs. Further research is needed in areas such as data augmentation, feature engineering, algorithm improvement, and real-time yield estimation.
Quick and accurate prediction of crop yields is beneficial for guiding crop field management and genetic breeding. This paper exploits the fast, non-destructive advantages of an unmanned aerial vehicle equipped with a multispectral camera to acquire spatial characteristics of rice and conducts research on yield estimation in an open environment. The study proposes a yield estimation framework that combines the synthetic minority oversampling technique (SMOTE) and a deep neural network (DNN). First, the framework uses the Pearson correlation coefficient to select 10 key vegetation indices and determine the optimal feature combination. Second, it augments the dataset through SMOTE, addressing the challenge of long data collection cycles and small sample sizes caused by long growth cycles. Then, based on this dataset, a yield estimation model is trained using a DNN and compared with partial least squares regression (PLSR), support vector regression (SVR), and random forest (RF). The experimental results indicate that the hybrid framework proposed in this study performs best (R² = 0.810, RMSE = 0.69 t/ha), significantly improving the accuracy of yield estimation compared to the other methods, with an R² improvement of at least 0.191. This demonstrates that the proposed framework can be used for rice yield estimation. Additionally, it provides a new approach for future yield estimation with small sample sizes, whether for other crops or for predicting numerical crop indicators.
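Classic SMOTE targets classification, so applying it to a regression dataset implies interpolating the target as well as the features. The sketch below is one assumed adaptation of that idea (function name, neighbour count, and the toy vegetation-index data are all illustrative, not the paper's implementation).

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_regression(X, y, n_new, k=5, seed=0):
    """SMOTE-style augmentation adapted to regression: pick a sample and
    one of its k nearest neighbours, then interpolate features and target
    at the same random fraction along the segment between them."""
    rng = np.random.default_rng(seed)
    idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)[1]
    Xs, ys = [], []
    for _ in range(n_new):
        i = rng.integers(len(X))
        j = idx[i, rng.integers(1, k + 1)]  # column 0 is the sample itself
        t = rng.random()
        Xs.append(X[i] + t * (X[j] - X[i]))
        ys.append(y[i] + t * (y[j] - y[i]))
    return np.vstack([X, Xs]), np.concatenate([y, ys])

# toy usage: 30 plots, 10 vegetation-index features, a linear yield target
rng = np.random.default_rng(1)
X = rng.random((30, 10))
y = X @ rng.random(10)
X_aug, y_aug = smote_regression(X, y, n_new=50)
print(X_aug.shape, y_aug.shape)  # → (80, 10) (80,)
```

Interpolated samples stay inside the convex hull of the originals, so this grows the training set without inventing yields outside the observed range.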
... Typical feature matching pipelines rely on heuristic filtering [1] or incorporate context at the matching stage [2], but they still struggle with uniform descriptors and repetitive spatial layouts [3], [4]. We propose keypoint semantic integration (KSI), a framework that enhances keypoint descriptors with the semantic mask instances on which the keypoints are located. ...
... At the same time, images of the three pens were captured using the integrated RGB camera at a frequency of 1/3 Hz. Further details of the drone flights are given in previous research [26]. ...
... In recent years, UAVs have been widely used in the field of low-altitude, low-speed remote sensing of farmland [16]. UAVs are used for remote sensing because of their high flexibility, low operating costs, and ability to capture high-resolution images [17]. Moreover, multispectral imaging has been widely used in land-use planning, agricultural production, environmental protection, and other research fields because of advantages such as fast detection speed and the ability to measure on site [18][19][20][21]. ...
... • Hybrid and State-of-the-art Models: Several researchers have attempted to combine deep learning algorithms with machine learning algorithms, as well as to combine multiple deep learning algorithms. The yield estimation framework presented in [114] estimates rice yields using an unmanned aerial vehicle equipped with a multispectral camera. The framework selects important vegetation indices, augments the dataset using the synthetic minority oversampling technique (SMOTE), and estimates yield with a deep neural network (DNN). ...
... Zhang et al. proposed an interactive segmentation method for dairy goats, integrating deep learning and user interaction to improve accuracy, offering an efficient tool for precision livestock monitoring [19]. Shu et al. introduced a semantic segmentation method based on an improved U-Net model, utilizing infrared imaging for automated cow face temperature collection, presenting a novel approach to enhance livestock health monitoring systems [20]. Segformer++ introduces efficient token-merging strategies to improve high-resolution semantic segmentation by reducing computational complexity, while maintaining high accuracy [21]. ...
... Vázquez-Diosdado et al. (2024) have also used an AdaBoost ensemble learning algorithm to classify calves' play and non-play behaviours and achieved an overall accuracy greater than 94%. Zhang et al. (2024) have classified healthy calves versus sick calves (with diarrhoea) using the YOLOv8n deep learning model. Features such as standing time, lying time, number of lying bouts, and average bout duration (derived from video feeds) were used for model training, and the model achieved a mean average precision of 0.995. ...
... An increasing number of flight missions require drones to operate in clusters within complex, dynamic environments, such as urban canyons, mountainous terrain, and forests [3,4]. In these environments, however, uncertain factors such as complex, changing obstacles and the drones' limited communication and sensing capabilities make it difficult for current methods based on preset group motion behavior control to adjust quickly and dynamically to environmental changes [5,6]. ...
... For instance, Pavlovic et al. (2021) focused on deep-learning-based dairy cow behavior classification using neck-mounted accelerometers, providing insights into the effectiveness of specific sensor placements. Jin et al. (2024) presented a deep learning model for sheep behavior classification, building upon previous work to offer improved accuracy and efficiency. Interestingly, while deep learning approaches have dominated recent research, some studies have continued to explore and refine more traditional machine learning methods. ...
... The main uses of ML identified in animal production are the detection of diseases (especially mastitis), the prediction of milk production, and the prediction of milk quality using classification or regression models (Slobe et al., 2021). Similar studies in the area have also been reported (Ruchay et al., 2022; Xu et al., 2024). ...
... Although classical methods generally perform well for most phenological stages, accurately identifying the transition phase from vegetative growth to reproductive growth remains challenging. This includes not only the tasseling stage in maize [24], but also the booting and heading stages in rice [22,25], and the heading stage in wheat [19]. This difficulty may be attributed to the relatively small spectral and texture changes, the short duration of these stages, and the limited number of training samples [22,25]. ...