Transformer Models improve the acoustic recognition of buzz-pollinating bee species
Keywords: ecosystem services, deep learning, buzz-pollinated crops, crop pollination
Abstract
Buzz-pollinated crops, such as tomatoes, potatoes, kiwifruit, and blueberries, are among the highest-yielding agricultural products. The flowers of these cultivated plants are characterized by a specialized floral morphology with poricidal anthers that require vibration to achieve full seed set. At least 446 bee species, in 82 genera, use floral sonication (buzz pollination) to collect pollen grains as food. Identifying and classifying these diverse, often look-alike bee species poses a challenge for taxonomists. Automated classification systems, based on audible bee floral buzzes, have been investigated to meet this need. Recently, convolutional neural network (CNN) models have demonstrated superior performance in recognizing and distinguishing bee-buzzing sounds compared to classical machine-learning (ML) classifiers. Nonetheless, the performance of CNNs remains unsatisfactory and can be improved. Therefore, we applied a novel transformer-based neural network architecture to the task of acoustic recognition of blueberry-pollinating bee species. We further compared the performance of the Audio Spectrogram Transformer (AST) model and its variants, including the Self-Supervised AST (SSAST) and the Masked Autoencoding AST (MAE-AST), to that of strong baseline CNN models from previous work at the task of bee species recognition. We also employed data augmentation techniques and evaluated these models on a data set of bee sounds recorded during visits to blueberry flowers in Chile (518 audio samples of 15 bee species). Our results revealed that transformer-based neural networks combined with pre-training and data augmentation outperformed CNN models (maximum F1-score: 64.5 ± 2%; accuracy: 82.2 ± 0.8%). These attention-based neural network architectures demonstrated exceptional performance in assigning bee buzzing sounds to their respective taxonomic categories, outperforming prior deep learning models.
However, transformer approaches face challenges related to small data set size and class imbalance, as do CNNs and classical ML algorithms. Combining pre-training with data augmentation is crucial for increasing the diversity and robustness of training data sets for the acoustic recognition of bee species. We document the potential of transformer architectures to improve the performance of audible bee species identification, offering promising new avenues for bioacoustic research and pollination ecology.
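The AST-style pipeline described in the abstract (spectrogram computation, patch embedding, self-attention, pooled classification over the 15 species) can be illustrated with a minimal NumPy sketch. This is a hypothetical toy example, not the authors' implementation: the frame sizes, patch dimensions, and random projection weights are illustrative assumptions, and a real model would use learned weights, mel filterbanks, and many stacked attention layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectrogram(signal, frame=256, hop=128):
    # Windowed FFT magnitudes over overlapping frames -> (time, freq).
    frames = [signal[i:i + frame] * np.hanning(frame)
              for i in range(0, len(signal) - frame + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))

def to_patches(spec, ph=8, pw=8):
    # Tile the spectrogram into flattened ph x pw patches, AST-style.
    t, f = spec.shape
    spec = spec[: t - t % ph, : f - f % pw]
    return (spec.reshape(t // ph, ph, f // pw, pw)
                .transpose(0, 2, 1, 3)
                .reshape(-1, ph * pw))  # (num_patches, patch_dim)

def self_attention(x, wq, wk, wv):
    # Single-head scaled dot-product attention over the patch sequence.
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(q.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ v

# Toy "buzz": 1 s of a 220 Hz tone plus noise at 8 kHz sampling.
sr = 8000
t = np.arange(sr) / sr
buzz = np.sin(2 * np.pi * 220 * t) + 0.1 * rng.standard_normal(sr)

spec = spectrogram(buzz)
x = to_patches(np.log1p(spec))          # log-compressed patch sequence
d = x.shape[1]
wq, wk, wv = (rng.standard_normal((d, d)) * 0.02 for _ in range(3))
ctx = self_attention(x, wq, wk, wv).mean(axis=0)  # mean-pooled embedding

n_classes = 15  # 15 bee species, as in the study's data set
logits = ctx @ (rng.standard_normal((d, n_classes)) * 0.02)
probs = np.exp(logits - logits.max())
probs /= probs.sum()  # class probabilities over the 15 species
```

In a trained AST the projection matrices and classifier head are learned, and pre-trained weights (as the abstract notes) are what make such models competitive on small bioacoustic data sets.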
More information
Title according to WOS: Transformer Models improve the acoustic recognition of buzz-pollinating bee species
Journal: ECOLOGICAL INFORMATICS
Volume: 86
Publisher: Elsevier
Publication date: 2025
Language: English
DOI: 10.1016/j.ecoinf.2025.103010
Notes: ISI