2016 International Conference on Frontiers of Information Technology (FIT)
Islamabad, Pakistan
Dec. 19, 2016 to Dec. 21, 2016
ISBN: 978-1-5090-5300-1
pp. 297-302
This paper presents a novel landmark-based audio fingerprinting algorithm for matching naval vessels' acoustic signatures. The algorithm uses a joint time-frequency approach with parameters optimized for the acoustic signatures of naval vessels. The technique exploits the relative time difference between neighboring frequency onsets, which is found to remain consistent across different samples recorded over time from the same vessel. The algorithm has been implemented in MATLAB and trialed with real acoustic signatures of submarines. The training and test samples of submarines were acquired from resources provided by the San Francisco National Park Association [14]. Storage requirements to populate the database with 500 tracks, allowing a maximum of 0.5 million feature hashes per track, remained below 1 GB. On an average PC, the database hash table can be populated with feature hashes of database tracks at 1250 hashes/second, converting 120 seconds of audio data into hashes in less than a second. Under varying conditions such as time skew, noise, and sample length, the results demonstrate the algorithm's robustness in identifying a correct match. Experimental results show a classification rate of 94% for the proposed approach, a considerable improvement over the 88% achieved in [17] using existing state-of-the-art techniques such as Detection of Envelope Modulation on Noise (DEMON) [15] and Low Frequency Analysis and Recording (LOFAR) [16].
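The landmark hashing the abstract describes can be sketched as follows. This is a hypothetical, minimal illustration of pairing spectral-peak onsets and hashing their relative time differences (the invariant the paper exploits); the function name, fan-out limit, bit-packing scheme, and peak values are assumptions, not the paper's actual parameters.

```python
def landmark_hashes(peaks, fan_out=3, max_dt=63):
    """peaks: list of (time_frame, freq_bin) tuples sorted by time.

    Each anchor peak is paired with up to `fan_out` later peaks within
    `max_dt` frames. The hash packs (f1, f2, dt); because only the
    *relative* time difference dt enters the key, the same constellation
    of onsets produces the same keys regardless of where the sample
    starts in the recording.
    """
    hashes = []
    for i, (t1, f1) in enumerate(peaks):
        paired = 0
        for t2, f2 in peaks[i + 1:]:
            dt = t2 - t1
            if dt > max_dt:
                break
            # Illustrative packing: 8 bits f1 | 8 bits f2 | 6 bits dt.
            key = (f1 & 0xFF) << 14 | (f2 & 0xFF) << 6 | (dt & 0x3F)
            hashes.append((key, t1))  # anchor time kept for offset voting
            paired += 1
            if paired == fan_out:
                break
    return hashes

# A time-shifted copy of the same peak constellation yields identical
# hash keys; only the stored anchor times differ.
peaks = [(0, 40), (3, 55), (7, 48), (12, 60)]
shifted = [(t + 100, f) for t, f in peaks]
keys_a = [k for k, _ in landmark_hashes(peaks)]
keys_b = [k for k, _ in landmark_hashes(shifted)]
```

At query time, matching keys vote on the offset between query and database anchor times; a consistent offset identifies the matching track.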
Time-frequency analysis, databases, acoustics, sonar, propellers, fingerprint recognition, pattern recognition, underwater warfare, acoustic signatures, naval vessels, audio fingerprinting
Muhammad Abdur Rehman Hashmi, Rana Hammad Raza, "Landmark Based Audio Fingerprinting for Naval Vessels", 2016 International Conference on Frontiers of Information Technology (FIT), pp. 297-302, 2016, doi:10.1109/FIT.2016.061