Combining Audio and Non-Audio Inputs in Evolved Neural Networks for Ovenbird Classification
By: Sergio Poo Hernandez, Vadim Bulitko, Erin Bayne
Potential Business Impact:
Helps computers identify animals by sound and other facts.
In the last several years, the use of neural networks as tools to automate species classification from digital data has increased. This is due in part to the high accuracy that Convolutional Neural Networks (CNNs) achieve on image classification. For audio data, CNN-based recognizers automate the classification of species in recordings by operating on sound visualizations (i.e., spectrograms). These recognizers commonly use the spectrogram as their sole input. However, researchers often have non-audio data available, such as a species' habitat preferences, phenology, and range, that could improve species classification. In this paper we show how the accuracy of a single-species recognizer neural network can be improved by using non-audio data as inputs alongside spectrogram information. We also analyze whether the improvements merely result from a network having more parameters rather than from combining the two inputs. We find that networks using the two different inputs achieve higher classification accuracy than networks of similar size that use only one of the inputs.
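To make the two-input design concrete, below is a minimal sketch in Python (Keras) of a network that fuses a CNN branch over the spectrogram with a dense branch over non-audio features. Note that the paper evolves its network topologies; the fixed topology here, along with all input shapes, layer sizes, and feature encodings, is an illustrative assumption rather than the authors' actual architecture.

    # Minimal sketch of a two-input recognizer: spectrogram + non-audio data.
    # All shapes and sizes are hypothetical placeholders.
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    # Branch 1: CNN over the spectrogram (assumed 128 mel bands x 256 frames).
    spec_in = layers.Input(shape=(128, 256, 1), name="spectrogram")
    x = layers.Conv2D(16, 3, activation="relu")(spec_in)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(32, 3, activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Flatten()(x)

    # Branch 2: non-audio data (e.g., habitat, phenology, range), assumed
    # to be encoded as a fixed-length numeric vector of 8 features.
    meta_in = layers.Input(shape=(8,), name="non_audio")
    m = layers.Dense(16, activation="relu")(meta_in)

    # Fuse the two branches and predict presence/absence of the target species.
    combined = layers.Concatenate()([x, m])
    h = layers.Dense(64, activation="relu")(combined)
    out = layers.Dense(1, activation="sigmoid", name="species_present")(h)

    model = Model(inputs=[spec_in, meta_in], outputs=out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

The single-input baselines the paper compares against would correspond to keeping only one branch; matching parameter counts across the variants is what lets the authors separate the effect of input fusion from the effect of sheer network size.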
Similar Papers
Convolutional Neural Network Optimization for Beehive Classification Using Bioacoustic Signals
Sound
Listens to bees to know if a hive is healthy.
Improving Bird Classification with Primary Color Additives
CV and Pattern Recognition
Colors help computers tell bird songs apart.
A Bird Song Detector for improving bird identification through Deep Learning: a case study from Doñana
Sound
Helps scientists find birds by listening to sounds.