Minimizing annotation effort for adaptation of speech-activity detection systems


Bibliographic Details
Main authors: Ferrer, L.; Graciarena, M.; Morgan, N.; Georgiou, P.; Narayanan, S.; Metze, F.
Format: CONF
Online access: http://hdl.handle.net/20.500.12110/paper_2308457X_v08-12-September-2016_n_p3002_Ferrer
Description
Summary: Annotating audio data for the presence and location of speech is a time-consuming and therefore costly task. This is mostly because annotation precision greatly affects the performance of the speech-activity detection (SAD) systems trained with this data, which means that the annotation process must be careful and detailed. Although significant amounts of data are already annotated for speech presence and are available to train SAD systems, these systems are known to perform poorly on channels that are not well represented by the training data. However, obtaining representative audio samples from a new channel is relatively easy, and this data can be used for training a new SAD system or adapting one trained with larger amounts of mismatched data. This paper focuses on the problem of selecting the best-possible subset of available audio data given a budgeted time for annotation. We propose simple selection approaches that lead to significant gains over naïve methods that merely select N full files at random. An approach that uses the frame-level scores from a baseline system to select regions such that the score distribution is uniformly sampled gives the best trade-off across a variety of channel groups. Copyright © 2016 ISCA.
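The best-performing selection strategy described above can be illustrated with a short sketch: given frame-level scores from a baseline SAD system, bin the score range uniformly and draw a fixed number of frames from each bin, so the selected subset covers the full score distribution rather than only the easy, confidently scored regions. This is an illustrative reading of the abstract, not the authors' implementation; the function name, bin count, and per-bin allocation are assumptions.

```python
import numpy as np

def select_frames_uniform_over_scores(scores, budget, n_bins=20, seed=0):
    """Select up to `budget` frame indices so that the baseline SAD scores
    of the selected frames are spread roughly uniformly over the score range.

    Sketch of the score-distribution-sampling idea; details are assumptions.
    """
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)

    # Partition the observed score range into n_bins equal-width bins.
    edges = np.linspace(scores.min(), scores.max(), n_bins + 1)
    bin_ids = np.clip(np.digitize(scores, edges[1:-1]), 0, n_bins - 1)

    # Spend the annotation budget evenly across score bins.
    per_bin = max(1, budget // n_bins)
    selected = []
    for b in range(n_bins):
        idx = np.flatnonzero(bin_ids == b)
        if idx.size:
            take = min(per_bin, idx.size)
            selected.extend(rng.choice(idx, size=take, replace=False))
    return np.sort(np.array(selected[:budget]))
```

Sampling each bin equally, instead of sampling frames at random, forces the annotated subset to include the ambiguous mid-score regions where a baseline system is most likely wrong, which is where extra labels help adaptation most.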