Pruning is a key task in pear orchards, essential for maintaining tree health, improving fruit quality, and ensuring long-term productivity. However, determining which branches to prune is a complex process that depends on tree structure, growth stage, and pruning strategy. For autonomous systems, this task is particularly challenging due to the variability of orchard environments and the structural complexity of trees.
In this paper, we present a method to classify and predict branches to be pruned using 3D reconstruction and machine learning. To this end, we collect multi-view RGB-D images of pear trees in orchard conditions and reconstruct partial 3D models of the trees using the TEASER++ algorithm. By comparing pre- and post-pruning models, we automatically label branches that were pruned, thereby generating a dataset, BRANCH_vol2, suitable for both reconstruction analysis and machine learning training. Based on this dataset, we train a PointNet++ neural network to predict pruning decisions directly on point clouds.
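The pre-/post-pruning comparison that drives the automatic labeling can be sketched as a nearest-neighbor test: once the two clouds are aligned (the paper uses TEASER++ for registration, which is assumed to have already been applied here), a point of the pre-pruning cloud with no close counterpart in the post-pruning cloud belongs to a removed branch. The function name, the k-d-tree approach, and the distance threshold below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def label_pruned_points(pre_cloud: np.ndarray,
                        post_cloud: np.ndarray,
                        threshold: float = 0.02) -> np.ndarray:
    """Label each point of the aligned pre-pruning cloud.

    A point is marked 1 (pruned) if its nearest neighbor in the
    post-pruning cloud is farther than `threshold` (same units as
    the clouds, e.g. meters), otherwise 0 (kept).
    """
    tree = cKDTree(post_cloud)                 # index the post-pruning scan
    dist, _ = tree.query(pre_cloud, k=1)       # nearest-neighbor distances
    return (dist > threshold).astype(np.int64)

# Toy example: two kept points and one removed point far from the post scan.
pre = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [5.0, 5.0, 5.0]])
post = np.array([[0.0, 0.0, 0.005], [1.0, 0.0, 0.0]])
labels = label_pruned_points(pre, post)        # → array([0, 0, 1])
```

Per-point binary labels of this form are exactly what a PointNet++-style segmentation network consumes as supervision; the threshold would in practice be tuned to the sensor noise and registration error.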
With this research, we aim to establish a starting point for automated pruning systems capable of assisting or replacing manual labor in orchards. Our approach demonstrates promising results, providing both the accuracy and efficiency needed to move toward practical robotic pruning in real-world agricultural environments.
Dukić, Jana; Pejić, Petra; Vidović, Ivan; Nyarko, Emmanuel Karlo
Towards Robotic Pruning: Automated Annotation and Prediction of Branches for Pruning on Trees Reconstructed Using RGB-D Images // Sensors, 25 (2025), 18; 5648-5673.
doi: https://doi.org/10.3390/s25185648