Vision based crown loss estimation for individual trees with remote aerial robots

Boon Ho, Basaran Bahadir Kocer, Mirko Kovac*

*Corresponding author for this work

Research output: Contribution to journal › Article (Academic Journal) › peer-review

12 Citations (Scopus)

Abstract

With the capability of capturing high-resolution imagery and the ease of accessing remote areas, aerial robots are becoming increasingly popular for forest health monitoring applications. For example, forestry tasks such as field surveys and foliar sampling, which are generally manual and labour-intensive, can be automated with remotely controlled aerial robots. In this study, we propose two new online frameworks to quantify and rank the severity of individual tree crown loss. The real-time crown loss estimation (RTCLE) model localises and classifies individual trees into their respective crown loss percentage bins. Experiments are conducted to investigate whether synthetically generated tree images can be used to train the RTCLE model, as real images with diverse viewpoints are generally expensive to collect. Results show that synthetic data training achieves a satisfactory baseline mean average precision (mAP), which can be further improved with a small amount of additional real imagery: mixing the real dataset with the generated synthetic data increases the mAP from approximately 60% to 78%. For individual tree crown loss ranking, a two-step crown loss ranking (TSCLR) framework is developed to handle inconsistently labelled crown loss data. The TSCLR framework detects individual trees before ranking them based on relative crown loss severity measures. The tree detection model is trained with the combined dataset used in the RTCLE model training, achieving an mAP of approximately 95%, which suggests that the model generalises well to unseen datasets. The relative crown loss severity of each tree is estimated, with deep representation learning, by a probabilistic encoder from a fully trained variational autoencoder (VAE) model. The VAE is trained end-to-end to reconstruct tree images in a background-agnostic way.
Based on a conservative evaluation, the crown loss severity estimated by the probabilistic encoder generally showed moderate agreement with the expert's estimation across all tree species present in the dataset. All software pipelines, the dataset, and the synthetic dataset generation code can be found in the GitHub link.
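The abstract describes ranking trees by a relative crown loss severity measure derived from the VAE's probabilistic encoder, but does not give the measure itself. As an illustrative sketch only, assuming encoder latent means are available per tree crop, one simple proxy is the latent-space distance from a centroid of known-healthy reference trees; all function names, variables, and the distance-based measure here are assumptions, not the paper's method:

```python
import numpy as np

def rank_crown_loss(latents, healthy_ref):
    """Rank trees by a relative crown loss severity proxy.

    The proxy is the Euclidean distance of each tree's latent mean
    from the centroid of known-healthy reference latents; larger
    distance is treated as more severe crown loss.
    Returns (indices most-severe-first, per-tree severity scores).
    """
    centroid = healthy_ref.mean(axis=0)
    severity = np.linalg.norm(latents - centroid, axis=1)
    order = np.argsort(-severity)  # descending severity
    return order, severity

# Toy 2-D latent means standing in for VAE encoder outputs.
healthy = np.array([[0.0, 0.0], [0.1, -0.1], [-0.1, 0.1]])
trees = np.array([[0.05, 0.0],   # near-healthy
                  [1.0, 1.0],    # moderate loss
                  [3.0, 2.5]])   # severe loss

order, severity = rank_crown_loss(trees, healthy)
print(order)  # → [2 1 0], most severe tree first
```

Because the ranking only needs an ordering, a relative proxy like this sidesteps the inconsistent absolute crown loss labels the TSCLR framework is designed to handle.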

Original language: English
Pages (from-to): 75-88
Number of pages: 14
Journal: ISPRS Journal of Photogrammetry and Remote Sensing
Volume: 188
Early online date: 11 Apr 2022
DOIs
Publication status: Published - Jun 2022

Bibliographical note

Funding Information:
This work was partially supported by funding from EPSRC (award No. EP/N018494/1, EP/R026173/1, EP/R009953/1, EP/S031464/1, EP/W001136/1), NERC (award No. NE/R012229/1) and the EU H2020 AeroTwin project (grant ID 810321). Mirko Kovac is supported by the Royal Society Wolfson fellowship (RSWF/R1/18003). The visual evaluations were based on data from the Swiss Long-term Forest Ecosystem Research programme LWF (www.lwf.ch), which is part of the UNECE Co-operative Programme on Assessment and Monitoring of Air Pollution Effects on Forests, ICP Forests (www.icp-forests.net). We are in particular grateful to C Hug for providing the WSL dataset. We also thank Dr. Richard Buggs for valuable comments and discussions on the associated problem.

Publisher Copyright:
© 2022 The Author(s)

Keywords

  • Aerial robots
  • Convolutional neural network
  • Crown loss estimation
  • Foliar sampling
  • Unmanned aerial vehicles
  • Variational autoencoder
