Toward Fail-Safe Speaker Recognition: Trial-Based Calibration with a Reject Option

The output scores of most of the speaker recognition systems are not directly interpretable as stand-alone values. For this reason, a calibration step is usually performed on the scores to convert them into proper likelihood ratios, which have a clear probabilistic interpretation. The standard calibration approach transforms the system scores using a linear function trained using data selected to closely match the evaluation conditions. This selection, though, is not feasible when the evaluation conditions are unknown. In previous work, we proposed a calibration approach for this scenario called trial-based calibration (TBC). TBC trains a separate calibration model for each test trial using data that is dynamically selected from a candidate training set to match the conditions of the trial. In this work, we extend the TBC method, proposing: 1) a new similarity metric for selecting training data that results in significant gains over the one proposed in the original work; 2) a new option that enables the system to reject a trial when not enough matched data are available for training the calibration model; and 3) the use of regularization to improve the robustness of the calibration models trained for each trial. We test the proposed algorithms on a development set composed of several conditions and on the Federal Bureau of Investigation multi-condition speaker recognition dataset, and we demonstrate that the proposed approach reduces calibration loss to values close to 0 for most of the conditions when matched calibration data are available for selection, and that it can reject most of the trials for which relevant calibration data are unavailable. © 2014 IEEE.

Full description

Saved in:
Bibliographic Details
Main Authors: Ferrer, L., Nandwana, M.K., McLaren, M., Castan, D., Lawson, A.
Format: JOUR
Subjects:
Online Access: http://hdl.handle.net/20.500.12110/paper_23299290_v27_n1_p140_Ferrer
Contributed by:
id todo:paper_23299290_v27_n1_p140_Ferrer
record_format dspace
spelling todo:paper_23299290_v27_n1_p140_Ferrer2023-10-03T16:41:01Z Toward Fail-Safe Speaker Recognition: Trial-Based Calibration with a Reject Option Ferrer, L. Nandwana, M.K. McLaren, M. Castan, D. Lawson, A. forensic voice comparison Speaker recognition trial-based calibration Calibration Data structures Logistics Mathematical transformations Personnel training Statistical tests Computational model Forensic voice comparisons Forensics Probabilistic interpretation Similarity metrics Speaker recognition Speaker recognition system Standard calibration Speech recognition The output scores of most of the speaker recognition systems are not directly interpretable as stand-alone values. For this reason, a calibration step is usually performed on the scores to convert them into proper likelihood ratios, which have a clear probabilistic interpretation. The standard calibration approach transforms the system scores using a linear function trained using data selected to closely match the evaluation conditions. This selection, though, is not feasible when the evaluation conditions are unknown. In previous work, we proposed a calibration approach for this scenario called trial-based calibration (TBC). TBC trains a separate calibration model for each test trial using data that is dynamically selected from a candidate training set to match the conditions of the trial. In this work, we extend the TBC method, proposing: 1) a new similarity metric for selecting training data that result in significant gains over the one proposed in the original work; 2) a new option that enables the system to reject a trial when not enough matched data are available for training the calibration model; and 3) the use of regularization to improve the robustness of the calibration models trained for each trial. 
We test the proposed algorithms on a development set composed of several conditions and on the Federal Bureau of Investigation multi-condition speaker recognition dataset, and we demonstrate that the proposed approach reduces calibration loss to values close to 0 for most of the conditions when matched calibration data are available for selection, and that it can reject most of the trials for which relevant calibration data are unavailable. © 2014 IEEE. JOUR info:eu-repo/semantics/openAccess http://creativecommons.org/licenses/by/2.5/ar http://hdl.handle.net/20.500.12110/paper_23299290_v27_n1_p140_Ferrer
institution Universidad de Buenos Aires
institution_str I-28
repository_str R-134
collection Biblioteca Digital - Facultad de Ciencias Exactas y Naturales (UBA)
topic forensic voice comparison
Speaker recognition
trial-based calibration
Calibration
Data structures
Logistics
Mathematical transformations
Personnel training
Statistical tests
Computational model
Forensic voice comparisons
Forensics
Probabilistic interpretation
Similarity metrics
Speaker recognition
Speaker recognition system
Standard calibration
Speech recognition
description The output scores of most of the speaker recognition systems are not directly interpretable as stand-alone values. For this reason, a calibration step is usually performed on the scores to convert them into proper likelihood ratios, which have a clear probabilistic interpretation. The standard calibration approach transforms the system scores using a linear function trained using data selected to closely match the evaluation conditions. This selection, though, is not feasible when the evaluation conditions are unknown. In previous work, we proposed a calibration approach for this scenario called trial-based calibration (TBC). TBC trains a separate calibration model for each test trial using data that is dynamically selected from a candidate training set to match the conditions of the trial. In this work, we extend the TBC method, proposing: 1) a new similarity metric for selecting training data that results in significant gains over the one proposed in the original work; 2) a new option that enables the system to reject a trial when not enough matched data are available for training the calibration model; and 3) the use of regularization to improve the robustness of the calibration models trained for each trial. We test the proposed algorithms on a development set composed of several conditions and on the Federal Bureau of Investigation multi-condition speaker recognition dataset, and we demonstrate that the proposed approach reduces calibration loss to values close to 0 for most of the conditions when matched calibration data are available for selection, and that it can reject most of the trials for which relevant calibration data are unavailable. © 2014 IEEE.
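The abstract above describes a recipe: fit a linear transform (score-to-log-likelihood-ratio) on calibration data matched to the trial, regularize the fit, and reject the trial when too little matched data is available. The following minimal Python sketch illustrates that general idea only; the function names, the plain gradient-descent logistic-regression fit, the L2 penalty, and the `min_trials` threshold are illustrative assumptions, not the authors' implementation (in particular, the similarity-based selection of matched data is not shown).

```python
import numpy as np

def train_linear_calibration(scores, labels, reg=1e-2, lr=0.1, n_iter=500):
    """Fit llr = a*score + b by L2-regularized logistic regression (plain GD).

    scores: raw system scores; labels: 1 for target trials, 0 for non-targets.
    """
    a, b = 1.0, 0.0
    y = np.where(labels == 1, 1.0, -1.0)   # +/-1 targets for logistic loss
    for _ in range(n_iter):
        z = y * (a * scores + b)
        g = -y / (1.0 + np.exp(z))          # d(log-loss)/d(llr) per trial
        ga = np.mean(g * scores) + reg * a  # L2 penalty on the scale only
        gb = np.mean(g)
        a -= lr * ga
        b -= lr * gb
    return a, b

def calibrate_trial(score, matched_scores, matched_labels, min_trials=50):
    """Trial-based calibration with a reject option (illustrative sketch).

    Returns a calibrated log-likelihood ratio, or None to reject the trial
    when fewer than min_trials matched calibration trials were selected.
    """
    if len(matched_scores) < min_trials:
        return None  # reject: not enough matched data to train reliably
    a, b = train_linear_calibration(matched_scores, matched_labels)
    return a * score + b
```

In a per-trial setting, `matched_scores` would be chosen by the similarity metric for each test trial, so each trial gets its own `(a, b)`; the reject option simply refuses to emit a likelihood ratio when that selection comes back too small.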
format JOUR
author Ferrer, L.
Nandwana, M.K.
McLaren, M.
Castan, D.
Lawson, A.
title Toward Fail-Safe Speaker Recognition: Trial-Based Calibration with a Reject Option
url http://hdl.handle.net/20.500.12110/paper_23299290_v27_n1_p140_Ferrer