All content in this area was uploaded by Redmond R. Shamshiri on Feb 12, 2019
... Land suitability is also assessed by considering soil components inferred from high-resolution digital images (Rendana et al., 2015). To optimize oil palm production, close-range photogrammetry is now used as a technique for precision agriculture (Shamshiri et al., 2017). In all these applications, none of the studies develops a hypothesis from tree colour variation, tree height variation, vegetation density, and topography. ...
... While there are a few instances of its application to land-suitability assessment in oil palm plantations (Shamshiri et al., 2017), close-range photogrammetry has largely served other purposes in rubber tree plantations. The study by Stark et al. (2017) on managing a rubber tree plantation does not include evaluations of terrain topology. ...
The foremost and most important stage of any precision farming is the choice of land. Land for oil palm and rubber tree farming usually extends into hundreds of hectares, which amounts to rigorous fieldwork with conventional ground-surveying techniques. In the recent past, satellite remote sensing and aerial photogrammetry have been used in large- and medium-scale mapping to ascertain land suitability for these farms. However, these techniques are capital intensive, having constraints on the flexibility and frequency of surveys. Image resolutions also do not attain sub-pixel accuracies. Aside from terrain topography, other variables that inform critical inferences on land suitability are colour variation, height variation, and vegetation density. This study, therefore, examines the applicability of close-range photogrammetry in aiding critical management decisions on land suitability for medium- and large-scale oil palm and rubber tree farming. The DJI Phantom 3 Standard drone was used to survey parts of the Nigeria Institute for Oil Palm Research and part of the Rubber Research Institute of Nigeria, both in Benin City, Nigeria. The aim is to ascertain the effectiveness of the drone's 1/2.3" CMOS 12-megapixel sensor and inbuilt GPS for providing three-dimensional high-resolution images for deciding on topography, colour variations, height variations, and vegetation density. The Pix4D and Agisoft software were both used to process the images. The digital models generated have 10 cm spatial resolution on three bands, able to delineate the spectral reflectance of the trees. Results obtained from this initial study encourage further work.
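The achievable spatial resolution of such a survey follows directly from the camera geometry. As a rough illustration (not from the study), the ground sampling distance of a nadir-pointing camera can be sketched as follows; the sensor width and focal length below are nominal figures assumed for a 1/2.3" sensor, not measured parameters of the Phantom 3:

```python
def ground_sampling_distance(altitude_m, sensor_width_mm,
                             focal_length_mm, image_width_px):
    """Ground sampling distance (metres per pixel) of a nadir-pointing
    camera: footprint width divided by image width in pixels."""
    return (altitude_m * sensor_width_mm) / (focal_length_mm * image_width_px)

# Assumed nominal values for a 1/2.3" 12 MP sensor (illustrative only)
gsd = ground_sampling_distance(100.0, 6.17, 3.61, 4000)
print(f"GSD at 100 m: {gsd * 100:.1f} cm/px")  # → GSD at 100 m: 4.3 cm/px
```

A coarser orthomosaic resolution, such as the 10 cm product reported above, can then be obtained by flying higher or by resampling during processing.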
... Vegetation indices are widely used for the estimation of crop status based on the amount of chlorophyll content by using visible and near-infrared (NIR) regions of the electromagnetic spectrum. Chlorophylls have strong absorption peaks in the red region and high reflectance peaks in the near-infrared region (Shamshiri et al., 2017). Maximal absorbance in the red region occurs between 660 nm and 680 nm. ...
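The red absorption and NIR reflectance contrast described above is exactly what the NDVI exploits: healthy vegetation yields a large NIR-minus-red difference. A minimal sketch (the reflectance values are invented for illustration):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, (NIR - Red) / (NIR + Red);
    a small epsilon guards against division by zero."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-10)

red_band = np.array([[0.08, 0.10], [0.30, 0.25]])  # canopy absorbs red
nir_band = np.array([[0.50, 0.45], [0.35, 0.30]])  # canopy reflects NIR
print(ndvi(nir_band, red_band))
```

Pixels covering dense canopy (top row) come out near 0.7, while the sparse or soil-dominated pixels (bottom row) fall below 0.1.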
Precision agriculture is a concept of agricultural management based on analyzing, measuring, and reacting to inter- and intra-field variability in crops. One of the tools deployed for crop monitoring in precision agriculture is the unmanned aerial vehicle, which offers high flexibility with fewer restrictions and higher spatial and spectral resolution in comparison to airborne and spaceborne systems. In this paper, the assessment of various vegetation indices was performed for paddy stress monitoring using the red-edge band from multispectral imagery. The objective of the study was to create rice field maps with the use of aerial imagery and an object-based image analysis technique, and to validate vegetation indices in rice field maps by using soil plant analysis development (SPAD) data. The results showed that the Normalized Difference Vegetation Index (NDVI) (R=0.957), Normalized Difference Red Edge (NDRE) (R=0.974), Soil Adjusted Vegetation Index (SAVI) (R=0.964), and Optimized Soil Adjusted Vegetation Index (OSAVI) (R=0.966) all provided positive linear correlations with SPAD readings. NDRE showed a higher correlation than the other vegetation indices, providing a better measurement for farmers to make decisions. This paper has demonstrated how aerial imagery can be used to collect accurate mapping in real time that can be analyzed to monitor crop conditions and chlorophyll content by using SPAD, enabling farmers to make informed decisions (Ang Yuhao, Nik Norasma Che'Ya, Nor Athirah Roslin and Mohd Razi Ismail, Pertanika J. Sci. & Technol. 28(3): 779-795, 2020). Further investigations need to be carried out by validating the real chlorophyll content to improve the existing correlations.
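The indices compared in that study have simple band-arithmetic definitions. A hedged sketch with hypothetical plot-level reflectances and SPAD readings — the formulas follow the standard published formulations (L=0.5 for SAVI, 0.16 for OSAVI), but the data below are invented purely to illustrate the correlation step:

```python
import numpy as np

def ndre(nir, red_edge):
    """Normalized Difference Red Edge."""
    return (nir - red_edge) / (nir + red_edge)

def savi(nir, red, L=0.5):
    """Soil Adjusted Vegetation Index with soil factor L."""
    return (1 + L) * (nir - red) / (nir + red + L)

def osavi(nir, red):
    """Optimized SAVI with a fixed 0.16 soil-adjustment constant."""
    return (nir - red) / (nir + red + 0.16)

# Hypothetical per-plot band means and SPAD readings (illustrative only)
nir  = np.array([0.52, 0.48, 0.40, 0.33])
re   = np.array([0.30, 0.29, 0.27, 0.26])
spad = np.array([44.0, 41.5, 36.0, 30.5])

r = np.corrcoef(ndre(nir, re), spad)[0, 1]  # Pearson correlation
print(f"Pearson r (NDRE vs SPAD): {r:.3f}")
```

With real field data, each index would be averaged per sampling plot and regressed against the SPAD reading taken at the same plot, as the study does.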
The oil palm is an important cash crop in Thailand. To maximize productivity from planting, oil palm plantation managers need to know the number of oil palm trees in the plantation area. In order to obtain this information, an approach for palm tree detection using high-resolution satellite images is proposed. This approach makes it possible to count the number of oil palm trees in a plantation. The process begins with the selection of the vegetation index having the highest discriminating power between oil palm trees and background. The index with the highest discriminating power is then used as the primary feature for palm tree detection. We hypothesize that oil palm trees are located at local peaks within the oil palm area. To enhance the separability between oil palm tree crowns and background, the rank transformation is applied to the index image. Local peaks in the enhanced index image are then detected by using the non-maximal suppression algorithm. Since both rank transformation and non-maximal suppression are window based, semi-variogram analysis is used to determine the appropriate window size. The performance of the proposed method was tested on high-resolution satellite images. In general, our approach produced very accurate results, e.g., about a 90 percent detection rate when compared with manual labeling.
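The core idea — crowns as local peaks of a rank-transformed index image — can be sketched as follows. This is an illustrative re-implementation on a toy image, not the authors' code; the semi-variogram-based window selection is omitted and a fixed half-window of 3 pixels is assumed:

```python
import numpy as np

def rank_transform(img, win=3):
    """Replace each pixel by the rank of its value within its window,
    enhancing separability between crowns and background."""
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(win, h - win):
        for j in range(win, w - win):
            patch = img[i - win:i + win + 1, j - win:j + win + 1]
            out[i, j] = (patch < img[i, j]).sum()
    return out

def local_peaks(index_img, win=3):
    """Non-maximal suppression: keep pixels that are the strict, unique
    maximum of their (2*win+1)^2 neighbourhood."""
    h, w = index_img.shape
    peaks = []
    for i in range(win, h - win):
        for j in range(win, w - win):
            patch = index_img[i - win:i + win + 1, j - win:j + win + 1]
            if index_img[i, j] == patch.max() and (patch == patch.max()).sum() == 1:
                peaks.append((i, j))
    return peaks

# Toy index image with two synthetic "crowns" (positions invented)
img = np.zeros((20, 20))
img[5, 5], img[14, 14] = 5.0, 7.0
print(local_peaks(rank_transform(img)))  # → [(5, 5), (14, 14)]
```

Note that the rank transform makes the two crowns equally prominent despite their different raw index values, which is precisely why it helps the peak detector.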
The main objective of this research was to establish a semiautomated object-based image analysis (OBIA) methodology for locating landslides. We have detected and delineated landslides within a study area in north-western Iran using normalized difference vegetation index (NDVI), brightness, and textural features derived from satellite imagery (IRS-ID and SPOT-5) in combination with slope and flow direction derivatives from a digital elevation model (DEM) and topographically oriented gray-level cooccurrence matrices (GLCMs). We utilized particular combinations of these information layers to generate objects by applying multiresolution segmentation in a sequence of feature selection and object classification steps. The results were validated by using a landslide inventory database including 109 landslide events. In this study, a combination of these parameters led to a high accuracy of landslide delineation, yielding an overall accuracy of 93.07%. Our results confirm the potential of OBIA for accurate delineation of landslides from satellite imagery and, in particular, the ability of OBIA to incorporate heterogeneous parameters such as DEM derivatives and surface texture measures directly in a classification process. The study contributes to the establishment of geographic object-based image analysis (GEOBIA) as a paradigm in remote sensing and geographic information science. Index Terms—GIScience, gray-level cooccurrence matrix (GLCM), landslide mapping, object-based image analysis (OBIA), remote sensing, rule-based classification, textural analysis.
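The GLCM texture features used in such pipelines are straightforward to compute. Below is a minimal sketch of a one-offset, normalized co-occurrence matrix and its contrast statistic — illustrative only; production work would typically use a library implementation such as scikit-image's `graycomatrix`:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for a single pixel offset
    (dx, dy), normalized to sum to one."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            i2, j2 = i + dy, j + dx
            if 0 <= i2 < h and 0 <= j2 < w:
                m[img[i, j], img[i2, j2]] += 1
    return m / m.sum()

def contrast(p):
    """GLCM contrast: sum of (i - j)^2 weighted by co-occurrence."""
    levels = p.shape[0]
    ii, jj = np.meshgrid(range(levels), range(levels), indexing="ij")
    return ((ii - jj) ** 2 * p).sum()

# Toy 4-level texture patch (values invented)
tex = np.array([[0, 0, 1],
                [1, 2, 2],
                [2, 2, 3]])
p = glcm(tex, levels=4)
print(contrast(p))
```

Computed per object or over a moving window, statistics like contrast, homogeneity, and entropy become the "surface texture measures" that feed the classification step.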
Image registration has been long used as a basis for the detection of moving objects. Registration techniques attempt to discover correspondences between consecutive frame pairs based on image appearances under rigid and affine transformations. However, spatial information is often ignored, and different motions from multiple moving objects cannot be efficiently modeled. Moreover, image registration is not well suited to handle occlusion that can result in potential object misses. This paper proposes a novel approach to address these problems. First, segmented video frames from unmanned aerial vehicle captured video sequences are represented using region adjacency graphs of visual appearance and geometric properties. Correspondence matching (for visible and occluded regions) is then performed between graph sequences by using multigraph matching. After matching, region labeling is achieved by a proposed graph coloring algorithm which assigns a background or foreground label to the respective region. The intuition of the algorithm is that background scene and foreground moving objects exhibit different motion characteristics in a sequence, and hence, their spatial distances are expected to be varying with time. Experiments conducted on several DARPA VIVID video sequences as well as self-captured videos show that the proposed method is robust to unknown transformations, with significant improvements in overall precision and recall compared to existing works.
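The intuition stated above — mutual distances between background regions stay roughly fixed while a moving object's distances fluctuate — can be sketched directly on centroid tracks. This toy version (not the paper's multigraph-matching and graph-coloring pipeline) labels each region by the median variance of its distances to all others; the median makes it robust when background regions are the majority:

```python
import numpy as np

def label_regions(T, thresh=0.5):
    """T: array of shape (n_regions, n_frames, 2) holding region
    centroids per frame.  Background regions keep near-constant mutual
    distances; a mover's distances to the rest vary over time."""
    n = T.shape[0]
    labels = []
    for a in range(n):
        variances = [np.linalg.norm(T[a] - T[b], axis=1).var()
                     for b in range(n) if b != a]
        labels.append("foreground" if np.median(variances) > thresh
                      else "background")
    return labels

# Three static background regions and one mover (coordinates invented)
T = np.array([
    [[0, 0],  [0, 0],  [0, 0]],
    [[10, 0], [10, 0], [10, 0]],
    [[0, 10], [0, 10], [0, 10]],
    [[5, 5],  [8, 5],  [11, 5]],   # drifts right each frame
], dtype=float)
print(label_regions(T))
```

The threshold is an assumption for this synthetic example; in practice it would have to absorb registration noise and platform motion.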
Land-cover maps provide essential data for a wide range of practical and small-scale applications. A number of data sources appropriate for land-cover extraction are available. Among these, images captured using unmanned aerial vehicles (UAVs) are low cost, have very high resolution, and can be acquired at any time with few restrictions. Over the past two decades, various classification techniques have been developed to extract land-cover features from UAV images, and object-based image analysis (OBIA) is the preferred technique based on the recent literature. This study presents a novel method that integrates the fuzzy unordered rule induction algorithm (FURIA) into OBIA to achieve accurate land-cover extraction from UAV images. The images were segmented using a multiresolution segmentation algorithm with an optimized scale parameter. The scale parameter was optimized using a novel approach that integrated feature space optimization into the plateau objective function. During the classification stage, significant features were selected via random forest, and rule sets were developed using FURIA. The results of the proposed approach were compared with those of decision tree (DT) rules and the support vector machine (SVM) classification method. The results of this study indicate that the proposed method outperforms DT and SVM with an overall accuracy of 91.23%. A transferability evaluation showed that FURIA achieved accurate classification results on different UAV image subsets captured at different times. The findings suggest that fuzzy rules are more appropriate than conventional crisp rules for land-cover extraction from UAV images.
Project FiRE (First Response Experiment), a disaster management technology demonstration, was performed in 2001. The experiment demonstrated the use of a thermal multispectral scanning imager, integrated on an unmanned aerial vehicle (UAV), a satellite uplink/downlink image data telemetry system, and near-real-time geo-rectification of the resultant imagery for data distribution via the Internet to disaster managers. The FiRE demonstration provided geo-corrected image data over a controlled burn to a fire management community in near-real-time by means of the melding of new technologies. The use of the UAV demonstrated remotely piloted flight (thereby reducing the potential for loss of human life during hazardous missions), and the ability to “linger and stare” over the fire for extended periods of time (beyond the capabilities of human-pilot endurance). Improvements in a high-temperature calibrated thermal imaging scanner allowed “remote” operations from a UAV and provided real-time accurate fire information collection over a controlled burn. Improved bit-rate capacity telemetry capabilities increased the amount, structure, and information content of the image data relayed to the ground. The integration of precision navigation instrumentation allowed improved accuracies in geo-rectification of the resultant imagery, easing data ingestion and overlay in a GIS framework. We present a discussion of the feasibility of utilizing new platforms, improved sensor configurations, improved telemetry, and new geo-correction software to facilitate wildfire management and mitigation strategies.
Unmanned Rotorcraft Systems explores the research and development of fully functional miniature UAV (unmanned aerial vehicle) rotorcraft, and provides a complete treatment of the design of autonomous miniature rotorcraft UAVs. The unmanned system is an integration of advanced technologies developed in the communications, computing, and control areas, and is an excellent testing ground for trialing and implementing modern control techniques. Included are detailed expositions of systematic hardware construction, software systems integration, aerodynamic modeling, and automatic flight control system design. Emphasis is placed on the cooperative control and flight formation of multiple UAVs, vision-based ground target tracking, and landing on moving platforms. Other issues, such as the development of GPS-less indoor micro aerial vehicles and vision-based navigation, are also discussed in depth: utilizing the vision-based system for ground target tracking, attacking, and landing; cooperative control and flight formation of multiple unmanned rotorcraft; and future research directions in the related areas.
The Idaho National Laboratory (INL), in conjunction with the University of Idaho, is evaluating novel approaches for using unmanned aerial vehicles (UAVs) as a quicker and safer method for monitoring biotic resources. Evaluating vegetative cover is an important factor in understanding the sustainability of many ecosystems. In assessing vegetative cover, methods that improve accuracy and cost efficiency could revolutionize how biotic resources are monitored on western federal lands. Sagebrush steppe ecosystems provide important habitat for a variety of species, some of which are important indicator species (e.g., sage grouse). Improved methods are needed to support monitoring these habitats because there are not enough resource specialists or funds available for comprehensive ground evaluation of these ecosystems. In this project, two types of UAV platforms (fixed wing and helicopter) were used to collect still-frame imagery to assess cover in sagebrush steppe ecosystems. This paper discusses the process for collecting and analyzing imagery from the UAVs to (1) estimate total percent cover, (2) estimate percent cover for six different types of vegetation, and (3) locate sage grouse based on representative decoys. The field plots were located on the INL site west of Idaho Falls, Idaho, in areas with varying amounts and types of vegetative cover. A software program called SamplePoint developed by the U.S. Department of Agriculture, Agricultural Research Service was used to evaluate the imagery for percent cover for the six vegetation types (bare ground, litter, shrubs, dead shrubs, grasses, and forbs). Results were compared against standard field measurements to assess accuracy.
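Percent-cover estimation from nadir imagery ultimately reduces to classifying pixels and taking a ratio. A crude sketch using the excess-green index 2G − R − B as the vegetation mask — a common heuristic, not the SamplePoint point-sampling method used in the study, and the threshold below is an assumption:

```python
import numpy as np

def percent_cover(rgb, exg_thresh=20):
    """Classify pixels with the excess-green index 2G - R - B and
    report vegetated pixels as a percentage of the frame."""
    r, g, b = (rgb[..., k].astype(int) for k in range(3))
    exg = 2 * g - r - b
    return 100.0 * (exg > exg_thresh).mean()

# Toy frame: top 3 rows vegetated, rest bare ground (values invented)
frame = np.full((10, 10, 3), 100, dtype=np.uint8)
frame[:3, :] = (0, 200, 0)
print(percent_cover(frame))  # → 30.0
```

Distinguishing the six cover types listed above (bare ground, litter, shrubs, dead shrubs, grasses, forbs) needs more than a single index, which is why the study relies on analyst-driven point sampling rather than a simple threshold.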
In this paper, we propose a fast pattern-matching algorithm based on normalized cross correlation (NCC) with centroid bounding to achieve a very efficient search. The algorithm calculates a histogram around the centroid within a maximum circle of radius R. After dividing the image into blocks of R×R size, the similarity between the color histogram of each image block and that of the circle around the centroid is calculated to find candidate blocks that might contain the centroid of the template; NCC is then applied to obtain the final result. Experimental results show that the proposed algorithm is very efficient compared with full-search NCC. The result has broad applications in the fields of object detection, image retrieval, etc.
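The full-search NCC baseline that the centroid-bounding scheme accelerates can be sketched directly; the speedup itself (histogram pre-filtering of the R×R blocks) is omitted here:

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between two equally sized arrays:
    mean-subtracted dot product over the product of norms."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return (p * t).sum() / denom if denom else 0.0

def full_search_ncc(image, template):
    """Exhaustive NCC scan over every template-sized window -- the
    baseline the centroid-bounding pre-filter is designed to beat."""
    th, tw = template.shape
    best_score, best_pos = -2.0, None
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            s = ncc(image[i:i + th, j:j + tw], template)
            if s > best_score:
                best_score, best_pos = s, (i, j)
    return best_pos, best_score

# Plant the template at a known position in a random image
rng = np.random.default_rng(0)
image = rng.random((20, 20))
template = image[4:9, 7:12].copy()
pos, score = full_search_ncc(image, template)
print(pos, round(score, 3))  # → (4, 7) 1.0
```

Because NCC is invariant to affine intensity changes, the exact-match position scores 1.0; the pre-filter in the paper avoids running this inner loop on blocks whose color histograms cannot contain the template's centroid.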
This paper presents an overview of ongoing research on small unmanned autonomous vehicles (UAVs) for cooperative remote sensing for real-time water management and irrigation control. Small UAVs can carry embedded cameras with different wavelength bands, which are low cost but have high spatial resolution. These imagers mounted on UAVs can form a camera array to perform multispectral imaging, with bands reconfigurable depending on the mission. The development of essential subsystems, such as the UAV platforms, embedded multispectral imagers, and image stitching and registration, is introduced together with real UAV flight-test results from three example missions. Finally, an outline of future research efforts is presented.