Naor David, Raid Saabni
Bats are well known for their ability to classify plants using echolocation: they emit ultrasonic waves and recognize a plant from the echo it returns, an ability vital to completing their tasks. Using an acoustic simulation, we created a dataset of simulated scenes of bats echolocating around plants, containing different plant types and the corresponding echoes returned to the emitter. We devised an end-to-end solution for classifying a plant type from a single echo by exploring and testing different variants of Convolutional Neural Network (CNN) architectures, which outperform baseline models such as Support Vector Machines (SVM) and Multi-Layer Perceptrons (MLP) previously used to classify such echoes. Different machine learning models are compared against each other by measuring their classification performance on the generated echo dataset. We explore different representations of a bat's echo, such as time-domain and spectral representations, and observe how well each representation suits each classifier architecture. Additionally, we present improvements that increase the model's robustness, thus improving the overall practicality of the solution. Finally, we discuss the evaluation results and sketch the idea of incorporating the concept of Optical Flow into the context of echolocation (Acoustic Flow).