An empirical comparison of three audio fingerprinting methods in music and feature-length film

Thanh Pham, Matthew Giamou, Gerald Penn

Abstract


An empirical comparison of three audio fingerprinting methods on music and feature-length film is presented. Shazam, a commercially successful algorithm, was compared against two vision-based algorithms: the original CMU algorithm and Google's Waveprint algorithm. Each query is matched to the song or film from which it was drawn in the respective dataset. The feature-length film dataset was transcoded and downsampled from 48 kHz to 16 kHz mono-channel PCM at a 256 kbps bitrate. The experiments were conducted on a machine with a single 3.0 GHz Intel Xeon CPU with a 4 MB cache and 16 GB of RAM. The F-measures on the two datasets show that optimizations for quality pay very high dividends on film audio, but not on music data. It is also found that Shazam handily outperforms both vision-based algorithms, in both time and quality.
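The transcoding step described above could be sketched as follows; this is a hypothetical reconstruction (the paper does not specify the tool used), assuming ffmpeg and 16-bit linear PCM, which is consistent with the reported bitrate (16 kHz × 16 bits × 1 channel = 256 kbps):

```shell
# Hypothetical preprocessing sketch, not the authors' actual pipeline:
# drop the video stream, downmix to mono, resample 48 kHz -> 16 kHz,
# and store as 16-bit linear PCM (yielding 16000 * 16 = 256 kbps).
ffmpeg -i film.mkv -vn -ac 1 -ar 16000 -c:a pcm_s16le film_16k_mono.wav
```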

Keywords


Algorithms; Audio fingerprinting; Bitrates; Data sets; Empirical comparison; Music data; Vision-based algorithms
