Project

Automated forestry and logging operations

Goal: The goal of the project is to partially automate logging and forestry operations, in order to amplify human control and productivity. We aim to employ state-of-the-art algorithms from mobile robotics (SLAM, navigation, LiDAR) and machine learning to reach this goal.

Date: 1 January 2016

Updates: 4 (0 new)
Recommendations: 1 (0 new)
Followers: 31 (0 new)
Reads: 408 (2 new)

Project log

Philippe Giguère
added a research item
Vision-based segmentation in forested environments is a key functionality for autonomous forestry operations such as tree felling and forwarding. Deep learning algorithms have demonstrated promising results on visual tasks such as object detection. However, the supervised learning process of these algorithms requires annotations from a large diversity of images. In this work, we propose to use simulated forest environments to automatically generate 43k realistic synthetic images with pixel-level annotations, and use them to train deep learning algorithms for tree detection. This allows us to address the following questions: i) what kind of performance we should expect from deep learning in harsh synthetic forest environments, ii) which annotations are the most important for training, and iii) which modality, RGB or depth, should be used. We also report the promising transfer learning capability of features learned on our synthetic dataset by directly predicting bounding boxes, segmentation masks and keypoints on real images.
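As an illustration only (not the authors' code), the following minimal PyTorch sketch shows how such pixel-annotated synthetic images could be used to fine-tune a torchvision Mask R-CNN for tree detection and segmentation; the random image and single-tree target below are dummy stand-ins for samples drawn from the synthetic dataset.

# Minimal sketch: one training step of Mask R-CNN on a synthetic, pixel-annotated image.
# The image and target here are dummy data standing in for the synthetic annotations.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

image = torch.rand(3, 512, 512)                               # synthetic RGB render
target = {
    "boxes": torch.tensor([[100.0, 50.0, 200.0, 480.0]]),     # x1, y1, x2, y2 of one trunk
    "labels": torch.tensor([1]),                              # class 1 = tree
    "masks": torch.zeros(1, 512, 512, dtype=torch.uint8),     # pixel-level mask
}
target["masks"][0, 50:480, 100:200] = 1

model = maskrcnn_resnet50_fpn(num_classes=2)                  # background + tree
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

model.train()
loss_dict = model([image], [target])                          # training mode returns a dict of losses
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()

Depth input, as compared in the paper, could be handled similarly, e.g., by replicating the depth map across the three image channels.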
Philippe Giguère
added a research item
Wood log picking is a challenging task to automate. Indeed, logs usually come in cluttered configurations, randomly oriented and overlapping. Recent work on log picking automation usually assumes that the logs' poses are known, with little consideration given to the actual perception problem. In this paper, we squarely address the latter, using a data-driven approach. First, we introduce a novel dataset, named TimberSeg 1.0, that is densely annotated, i.e., that includes both bounding boxes and pixel-level mask annotations for logs. This dataset comprises 220 images with 2500 individually segmented logs. Using our dataset, we then compare three neural network architectures on the task of individual log detection and segmentation: two region-based methods and one attention-based method. Unsurprisingly, our results show that axis-aligned proposals, failing to take into account the directional nature of logs, underperform with 19.03 mAP. A rotation-aware proposal method significantly improves results, to 31.83 mAP. More interestingly, a Transformer-based approach, without any inductive bias on rotations, outperforms the other two, achieving a mAP of 57.53 on our dataset. Our use case demonstrates the limitations of region-based approaches for cluttered, elongated objects. It also highlights the potential of attention-based methods for this specific task, as they work directly at the pixel level. These encouraging results indicate that such a perception system could be used to assist operators in the short term, or to fully automate log picking operations in the future.
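The mAP figures above are standard COCO-style metrics; as a hedged sketch of how such instance segmentation scores are typically computed (the file names below are placeholders, not files shipped with TimberSeg 1.0), one can use pycocotools:

# Sketch of COCO-style mAP evaluation for log instance segmentation.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

gt = COCO("timberseg_annotations.json")        # ground-truth boxes/masks in COCO format
dt = gt.loadRes("model_predictions.json")      # predicted segmentations with scores

evaluator = COCOeval(gt, dt, iouType="segm")   # "segm" scores masks, "bbox" scores boxes
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()                          # prints AP averaged over IoU 0.50:0.95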
Philippe Giguère
added a research item
Challenges inherent to autonomous wintertime navigation in forests include the lack of a reliable Global Navigation Satellite System (GNSS) signal, low feature contrast, high illumination variations and a changing environment. This type of off-road environment is an extreme case of situations autonomous cars could encounter in northern regions. Thus, it is important to understand the impact of this harsh environment on autonomous navigation systems. To this end, we present a field report analyzing teach-and-repeat navigation in a subarctic region while subject to large variations of meteorological conditions. First, we describe the system, which relies on point cloud registration to localize a mobile robot through a boreal forest, while simultaneously building a map. We experimentally evaluate this system over 18.6 km of autonomous navigation in the teach-and-repeat mode. We show that dense vegetation perturbs the GNSS signal, rendering it unsuitable for navigation on forest trails. Furthermore, we highlight the increased uncertainty related to localizing using point cloud registration in forest corridors. We demonstrate that it is not snow precipitation, but snow accumulation that affects our system's ability to localize within the environment. Finally, we expose some lessons learned and challenges from our field campaign to support better experimental work in winter conditions.
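As a minimal sketch of the core localization step, the repeat pass can localize by registering the current lidar scan against the map built during the teach pass; the example below uses Open3D's point-to-plane ICP as a stand-in for the registration pipeline described above, and the file names are placeholders.

# Minimal sketch of localization by point cloud registration against the taught map.
import numpy as np
import open3d as o3d

map_cloud = o3d.io.read_point_cloud("teach_map.pcd")
scan = o3d.io.read_point_cloud("current_scan.pcd")
map_cloud.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=1.0, max_nn=30))

init_guess = np.eye(4)   # in practice, the last known pose or an odometry prediction
result = o3d.pipelines.registration.registration_icp(
    scan, map_cloud, max_correspondence_distance=0.5, init=init_guess,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())

print(result.transformation)                  # 4x4 pose of the robot in the map frame
print("fitness (inlier ratio):", result.fitness)

A drop in the registration fitness is one simple indicator of the increased localization uncertainty mentioned above for narrow forest corridors.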
Philippe Giguère
added an update
Forestry plays a significant role in many economies, yet productivity gains of human operators have more or less plateaued in the last decade. Starting five years ago, we have been exploring how existing breakthroughs in AI (Deep Learning) and mobile robotics (3D mapping) could be used to alleviate the workload of forestry operators, and thereby improve productivity. At the same time, forest environments pose challenges to many algorithms and techniques, and consequently offer the opportunity to extend the scientific and technical state of the art.
In this presentation, we will start by going over some of our results. In particular, we will discuss the performance of using bark images for two tasks: tree species identification, and fingerprinting for tree re-identification. We will then discuss ongoing efforts on large-scale 3D mapping of forests with mobile robots, and how well we can estimate the key statistic of Diameter at Breast Height (DBH) from reconstructed point clouds. We will conclude by presenting our new project related to the automation of tree felling and forwarding.
Presented remotely at IROS 2020 Workshop on Perception, Planning and Mobility in Forestry Robotics (WPPMFR 2020).
 
Jean-François Tremblay
added a research item
Forestry is a major industry in many parts of the world, yet this potential application domain has been overlooked by the robotics community. For instance, forest inventory, a cornerstone of efficient and sustainable forestry, is still traditionally performed manually by qualified professionals. The lack of automation in this particular task, consisting chiefly of measuring tree attributes, limits its speed and, therefore, the area that can be economically covered. To this effect, we propose to use recent advancements in three-dimensional mapping approaches in forests to automatically measure tree diameters from mobile robot observations. While previous studies showed the potential for such technology, they lacked a rigorous analysis of diameter estimation methods in challenging and large-scale forest environments. Here, we validated multiple diameter estimation methods, including two novel ones, on a new publicly available dataset which includes four different forest sites, 11 trajectories, totaling 1458 tree observations over 14,000 m². From our extensive validation, we concluded that our mapping method is usable in the context of automated forest inventory, with our best diameter estimation method yielding a root mean square error of 3.45 cm for our whole dataset and 2.04 cm in ideal conditions consisting of mature forest with well-spaced trees. Furthermore, we release this dataset to the public (https://norlab.ulaval.ca/research/montmorencydataset), to spur further research in robotic forest inventories. Finally, stemming from this large-scale experiment, we provide recommendations for future deployments of mobile robots in a forestry context.
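As a concrete illustration of diameter estimation from a point cloud (a generic example, not one of the specific estimators compared in the paper), one can slice a segmented tree around breast height (1.3 m) and fit a circle to the slice by linear least squares:

# Illustrative DBH estimation: slice the stem around breast height and fit a circle
# with the simple Kåsa least-squares fit.
import numpy as np

def estimate_dbh(tree_points, ground_z=0.0, breast_height=1.3, slice_half_width=0.05):
    z = tree_points[:, 2] - ground_z
    band = tree_points[np.abs(z - breast_height) < slice_half_width]
    if len(band) < 10:
        raise ValueError("not enough points in the breast-height slice")
    x, y = band[:, 0], band[:, 1]
    # Kåsa fit: solve  a*x + b*y + c = x^2 + y^2  in the least-squares sense,
    # where the circle center is (a/2, b/2) and radius^2 = c + (a/2)^2 + (b/2)^2.
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x**2 + y**2
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    radius = np.sqrt(c + (a / 2) ** 2 + (b / 2) ** 2)
    return 2.0 * radius                                    # diameter at breast height

# Example with synthetic points on a 30 cm diameter stem:
theta = np.random.uniform(0, 2 * np.pi, 500)
pts = np.column_stack([0.15 * np.cos(theta), 0.15 * np.sin(theta),
                       np.random.uniform(1.2, 1.4, 500)])
print(f"estimated DBH: {estimate_dbh(pts):.3f} m")         # ~0.30 m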
Philippe Giguère
added a research item
The ability to visually re-identify objects is a fundamental capability in vision systems. Oftentimes, it relies on collections of visual signatures based on descriptors, such as Scale Invariant Feature Transform (SIFT) or Speeded Up Robust Features (SURF). However, these traditional descriptors were designed for a certain domain of surface appearances and geometries (limited relief). Consequently, highly textured surfaces such as tree bark pose a challenge to them. In turn, this makes it more difficult to use trees as identifiable landmarks for navigational purposes (robotics) or to track felled lumber along a supply chain (logistics). We thus propose to use data-driven descriptors trained on bark images for tree surface re-identification. To this effect, we collected a large dataset containing 2,400 bark images with strong illumination changes, annotated by surface and with the ability to pixel-align them. We used this dataset to sample more than 2 million 64x64 pixel patches to train our novel local descriptors DeepBark and SqueezeBark. Our DeepBark method has shown a clear advantage over the hand-crafted descriptors SIFT and SURF. Furthermore, we demonstrated that DeepBark can reach a Precision@1 of 99.8% in a database of 7,900 images with only 11 relevant images. Our work thus suggests that re-identifying tree surfaces in a challenging context is possible, while making public a new dataset.
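For illustration, a Precision@1 score such as the one reported above can be computed by nearest-neighbour retrieval over descriptor vectors; the sketch below uses random placeholder descriptors in place of DeepBark outputs.

# Sketch of Precision@1 for descriptor-based retrieval: for each query descriptor,
# check whether its nearest neighbour in the database comes from the same surface.
import numpy as np

def precision_at_1(query_desc, query_labels, db_desc, db_labels):
    q = query_desc / np.linalg.norm(query_desc, axis=1, keepdims=True)
    d = db_desc / np.linalg.norm(db_desc, axis=1, keepdims=True)
    dists = np.linalg.norm(q[:, None, :] - d[None, :, :], axis=2)  # pairwise distances
    nearest = np.argmin(dists, axis=1)
    return np.mean(db_labels[nearest] == query_labels)

# Toy example: 128-D descriptors for 5 query patches against a 100-patch database.
rng = np.random.default_rng(0)
db_desc = rng.normal(size=(100, 128))
db_labels = rng.integers(0, 10, size=100)                  # surface identity of each patch
query_desc = db_desc[:5] + 0.01 * rng.normal(size=(5, 128))
query_labels = db_labels[:5]
print("Precision@1:", precision_at_1(query_desc, query_labels, db_desc, db_labels))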
Philippe Giguère
added 2 research items
To help future mobile agents plan their movement in harsh environments, a predictive model has been designed to determine which areas would be favorable for Global Navigation Satellite System (GNSS) positioning. The model is able to predict the number of viable satellites for a GNSS receiver, based on a 3D point cloud map and a satellite constellation. Both occlusion and absorption effects of the environment are considered. A rugged mobile platform was designed to collect data in order to generate the point cloud maps. It was deployed during the Canadian winter, known for large amounts of snow and extremely low temperatures. The test environments include a highly dense boreal forest and a university campus with high buildings. The experimental results indicate that the model performs well in both structured and unstructured environments.
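A heavily simplified sketch of the occlusion part of such a model is shown below: a satellite is counted as viable if the line of sight from the receiver toward its direction does not cross an occupied voxel of the map. The absorption term and the actual model parameters from the paper are not reproduced here.

# Simplified satellite-visibility check by ray marching through an occupied-voxel set.
import numpy as np

def count_visible_satellites(occupied_points, voxel_size, receiver, sats_az_el,
                             max_range=100.0, step=0.5):
    occupied = {tuple(v) for v in np.floor(occupied_points / voxel_size).astype(int)}
    visible = 0
    for az, el in sats_az_el:                              # azimuth from north, radians
        direction = np.array([np.cos(el) * np.sin(az),
                              np.cos(el) * np.cos(az),
                              np.sin(el)])
        blocked = False
        for r in np.arange(step, max_range, step):         # march along the line of sight
            cell = tuple(np.floor((receiver + r * direction) / voxel_size).astype(int))
            if cell in occupied:
                blocked = True
                break
        visible += not blocked
    return visible

# Toy example: an 8 m tall "trunk" of occupied voxels 10 m north of the receiver.
occupied_points = np.array([[0.0, 10.0, float(z)] for z in range(8)])
sats = [(0.0, np.deg2rad(10.0)),    # low-elevation satellite due north -> blocked
        (0.0, np.deg2rad(80.0))]    # nearly overhead -> visible
print(count_visible_satellites(occupied_points, 1.0, np.zeros(3), sats))   # -> 1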
The ability to map challenging sub-arctic environments opens new horizons for robotic deployments in industries such as forestry, surveillance, and open-pit mining. In this paper, we explore possibilities of large-scale lidar mapping in a boreal forest. Computational and sensory requirements with regard to contemporary hardware are considered as well. Lidar mapping is often based on a SLAM technique relying on pose graph optimization, which fuses the Iterative Closest Point (ICP) algorithm, Global Navigation Satellite System (GNSS) positioning, and Inertial Measurement Unit (IMU) measurements. To handle those sensors directly within the ICP minimization process, we propose an alternative approach of embedding external constraints. Furthermore, a novel formulation of a cost function is presented, addressing the problem of handling uncertainties from GNSS and lidar points. To test our approach, we acquired a large-scale dataset in the Forêt Montmorency research forest. We report on the technical problems faced during our winter deployments aiming at building 3D maps using our new cost function. Those maps demonstrate both global and local consistency over 4.1 km.
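The exact cost function is given in the paper; as a purely schematic illustration of what embedding a GNSS constraint directly in the ICP minimization can look like, one can weight the point-to-plane residuals and the deviation from the GNSS fix by their respective uncertainties:

\[
J(\mathbf{T}) = \sum_{i} \frac{\big(\mathbf{n}_i^{\top}(\mathbf{T}\mathbf{p}_i - \mathbf{q}_i)\big)^2}{\sigma_{\text{lidar}}^2}
\;+\; (\mathbf{t}_{\mathbf{T}} - \mathbf{g})^{\top} \boldsymbol{\Sigma}_{\text{GNSS}}^{-1} (\mathbf{t}_{\mathbf{T}} - \mathbf{g}),
\]

where \(\mathbf{T}\) is the rigid transformation being estimated, \(\mathbf{p}_i\) and \(\mathbf{q}_i\) are matched points with normals \(\mathbf{n}_i\), \(\mathbf{t}_{\mathbf{T}}\) is the translation part of \(\mathbf{T}\), \(\mathbf{g}\) is the GNSS position, and \(\sigma_{\text{lidar}}\), \(\boldsymbol{\Sigma}_{\text{GNSS}}\) encode the sensor uncertainties. This generic form is only illustrative and is not the formulation proposed in the paper.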
Philippe Giguère
added a research item
Video available at: https://www.youtube.com/watch?v=dJ8eIOvcGPw
Forestry is a major industry in many parts of the world. It relies on forest inventory, which consists of measuring tree attributes. We propose to use 3D mapping, based on the iterative closest point algorithm, to automatically measure tree diameters in forests from mobile robot observations. While previous studies showed the potential for such technology, they lacked a rigorous analysis of diameter estimation methods in challenging forest environments. Here, we validated multiple diameter estimation methods, including two novel ones, on a new varied dataset of four different forest sites, 11 trajectories, totaling 1458 tree observations and 1.4 hectares. We provide recommendations for the deployment of mobile robots in a forestry context. We conclude that our mapping method is usable in the context of automated forest inventory, with our best method yielding a root mean square error of 3.45 cm for our whole dataset, and 2.04 cm in ideal conditions consisting of mature forest with well-spaced trees.
Philippe Giguère
added a research item
Grasping is a fundamental robotic task needed for the deployment of household robots or furthering warehouse automation. However, few approaches are able to perform grasp detection in real time (frame rate). To this effect, we present Grasp Quality Spatial Transformer Network (GQ-STN), a one-shot grasp detection network. Being based on the Spatial Transformer Network (STN), it produces not only a grasp configuration, but also directly outputs a depth image centered at this configuration. By connecting our architecture to an externally-trained grasp robustness evaluation network, we can train efficiently to satisfy a robustness metric via the backpropagation of the gradient emanating from the evaluation network. This removes the difficulty of training detection networks on sparsely annotated databases, a common issue in grasping. We further propose to use this robustness classifier to compare approaches, being more reliable than the traditional rectangle metric. Our GQ-STN is able to detect robust grasps on the depth images of the Dex-Net 2.0 dataset with 92.4% accuracy in a single pass of the network. We finally demonstrate in a physical benchmark that our method can propose robust grasps more often than previous sampling-based methods, while being more than 60 times faster.
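As a hedged illustration of the spatial-transformer mechanism at the heart of GQ-STN (layer sizes and the 300x300 input below are illustrative, not the published architecture), a small localization network regresses an affine transform from the depth image, and grid sampling then produces a differentiable crop centered on the predicted grasp configuration:

# Sketch of the Spatial Transformer mechanism: regress an affine transform from the
# depth image, then differentiably resample a patch centered on that configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraspSTN(nn.Module):
    def __init__(self, out_size=32):
        super().__init__()
        self.out_size = out_size
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.loc = nn.Linear(32 * 4 * 4, 6)          # 2x3 affine parameters
        self.loc.weight.data.zero_()
        self.loc.bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))  # identity init

    def forward(self, depth):                        # depth: (B, 1, H, W)
        theta = self.loc(self.features(depth)).view(-1, 2, 3)
        grid = F.affine_grid(theta, [depth.size(0), 1, self.out_size, self.out_size],
                             align_corners=False)
        patch = F.grid_sample(depth, grid, align_corners=False)
        return theta, patch                          # grasp transform + centered depth patch

model = GraspSTN()
theta, patch = model(torch.rand(2, 1, 300, 300))
print(theta.shape, patch.shape)                      # (2, 2, 3) and (2, 1, 32, 32)

In the full method, the resampled patch would be scored by the externally-trained robustness network, whose gradient drives the localization parameters.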
Jean-François Tremblay
added a research item
Terrestrial laser scanning (TLS) often makes use of multiple scans in forests to allow for a complete view of a given area. Combining measurements from multiple locations requires accurate co-registration of the scans to a common reference coordinate system, which currently relies on markers, an often cumbersome process in forests. Existing algorithms for achieving marker-free registration of TLS scans in forests promise to significantly decrease field work time, but are not yet operational and their results have not been validated against traditional methods. Here we present a new implementation of an existing approach which runs in parallel mode and is able to process TLS data acquired over large forest areas. To validate our algorithm, point cloud registration matrices (translation and rotation) derived from our algorithm were compared to those obtained using reflective markers in multiple forest types. The results show that our approach can be used operationally in forests with relatively clear understory, and it provides accuracy similar to that obtained from using reflective markers. Furthermore, we identified factors that can lead to this approach falling short of providing acceptable results in terms of accuracy.
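For illustration, comparing a marker-free registration matrix against the marker-based reference reduces to reporting a translation offset and a relative rotation angle; the matrices below are toy placeholders, not values from the study.

# Sketch of comparing a marker-free registration to the marker-based reference:
# split each 4x4 registration matrix into rotation and translation parts.
import numpy as np

def registration_error(T_est, T_ref):
    R_est, t_est = T_est[:3, :3], T_est[:3, 3]
    R_ref, t_ref = T_ref[:3, :3], T_ref[:3, 3]
    trans_err = np.linalg.norm(t_est - t_ref)                      # metres
    R_delta = R_est @ R_ref.T                                      # relative rotation
    angle = np.arccos(np.clip((np.trace(R_delta) - 1.0) / 2.0, -1.0, 1.0))
    return trans_err, np.degrees(angle)

# Toy example: estimate differs from the reference by 2 cm and about 1 degree about z.
c, s = np.cos(np.deg2rad(1.0)), np.sin(np.deg2rad(1.0))
T_ref = np.eye(4)
T_est = np.array([[c, -s, 0, 0.02],
                  [s,  c, 0, 0.00],
                  [0,  0, 1, 0.00],
                  [0,  0, 0, 1.00]])
print(registration_error(T_est, T_ref))    # -> (0.02, ~1.0)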
Philippe Giguère
added an update
Data acquisition and map creation were performed by Master's student Jean-François Tremblay, co-supervised with Martin Béland. The 3D map-building software is based on libpointmatcher, by Prof. François Pomerleau.
 
Philippe Giguère
added 6 research items
Enabling automated 3D mapping in forests is an important component of the future development of forest technology, and has been garnering interest in the scientific community, as can be seen from the many recent publications. Accordingly, the authors of the present paper propose the use of a Simultaneous Localisation and Mapping algorithm, called graph-SLAM, to generate local maps of forests. In their study, the 3D data required for the mapping process were collected using a custom-made mobile platform equipped with a number of sensors, including a Velodyne VLP-16 LiDAR, a stereo camera, an IMU, and a GPS. The 3D map was generated solely from laser scans, first by relying on laser odometry and then by improving it with robust graph optimisation after loop closures, which is the core of the graph-SLAM algorithm. The resulting map, in the form of a 3D point cloud, was then evaluated in terms of its accuracy and precision. Specifically, the accuracy of the fitted diameter at breast height (DBH) and the relative distance between the trees were evaluated. The results show that the DBH estimates using the Pratt circle fit method could achieve a mean estimation error of approximately 2 cm (7–12%) and an RMSE of 2.38 cm (9%), whereas for tree positioning accuracy, the mean error was 0.0476 m. The authors conclude that robust SLAM algorithms can support the development of forestry by providing cost-effective and acceptable quality methods for forest mapping. Moreover, such maps open up the possibility of precision localisation for forestry vehicles.
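To illustrate the graph-SLAM idea of correcting laser odometry with loop closures, here is a tiny 2D pose-graph example using GTSAM; it is an independent illustration, not the pipeline used in the paper.

# Tiny 2D pose-graph example: odometry edges around a square plus one loop closure.
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0, 0, 0), prior_noise))
# Odometry: drive a 2 m square, turning 90 degrees at each corner.
for i in range(4):
    graph.add(gtsam.BetweenFactorPose2(i, i + 1, gtsam.Pose2(2, 0, np.pi / 2), odom_noise))
# Loop closure: pose 4 is observed to coincide with pose 0, pulling drift back in.
graph.add(gtsam.BetweenFactorPose2(4, 0, gtsam.Pose2(0, 0, 0), odom_noise))

initial = gtsam.Values()
guess = gtsam.Pose2(0, 0, 0)
for i in range(5):
    initial.insert(i, guess)
    guess = guess.compose(gtsam.Pose2(2.1, 0.1, np.pi / 2 + 0.05))  # drifting dead-reckoning

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
for i in range(5):
    print(i, result.atPose2(i))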
Philippe Giguère
added a project goal
The goal of the project is to partially automate logging and forestry operations, in order to amplify human control and productivity. We aim to employ state-of-the-art algorithms from mobile robotics (SLAM, navigation, LiDAR) and machine learning to reach this goal.