This paper proposes a speaker recognition method based on Gaussian mixture model (GMM) token ratio similarity based score regulation (GTRSR). Within the GMM-UBM (universal background model) recognition framework, during the adaptation (enrollment) and test stages, each utterance is tokenized by recording, for every feature frame, the index of the UBM Gaussian component that scores highest on that frame (the GMM token), and the occurrence ratio of each token over the utterance is computed and stored for both the enrollment and test utterances. At test time, the similarity between the GMM token ratio distributions of the test utterance and the enrollment utterance (GTRS) is computed; when the GTRS falls below a threshold, the test score is multiplied by a penalty factor, and the result is taken as the final score of the test utterance. Experiments on the MASC corpus show that the method yields a measurable improvement in recognition performance.
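The score regulation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: cosine similarity is assumed as the GTRS measure, and the `threshold` and `penalty` values are placeholders, since the abstract does not specify them.

```python
import numpy as np

def gmm_token_ratios(frame_scores):
    """Compute the GMM token ratio vector of one utterance.

    frame_scores: (n_frames, n_components) array of per-frame scores
    (e.g. log-likelihoods) under each UBM Gaussian component.
    The GMM token of a frame is the index of its highest-scoring
    component; the ratio vector is each token's relative frequency.
    """
    tokens = np.argmax(frame_scores, axis=1)
    counts = np.bincount(tokens, minlength=frame_scores.shape[1])
    return counts / counts.sum()

def gtrsr_score(raw_score, enroll_ratios, test_ratios,
                threshold=0.5, penalty=0.8):
    """Regulate the raw test score with the GMM token ratio similarity.

    GTRS is taken here as cosine similarity (an assumption); if it falls
    below the threshold, the raw score is multiplied by a penalty factor.
    threshold and penalty are illustrative, not the paper's values.
    """
    gtrs = np.dot(enroll_ratios, test_ratios) / (
        np.linalg.norm(enroll_ratios) * np.linalg.norm(test_ratios) + 1e-12)
    return raw_score * penalty if gtrs < threshold else raw_score
```

In this sketch, well-matched token distributions leave the score unchanged, while a mismatch (e.g. caused by emotional variation in the test utterance) attenuates it before the final identification decision.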
The shapes of speakers' vocal organs change under different emotional states, which causes the emotional acoustic space of short-time features to deviate from the neutral acoustic space and thereby degrades speaker recognition performance. Features deviating greatly from the neutral acoustic space are regarded as mismatched features, and they negatively affect speaker recognition systems. Emotion variation produces different feature deformations for different phonemes, so it is reasonable to build a finer model that detects mismatched features within each phoneme. However, given the difficulty of phoneme recognition, three kinds of acoustic class recognition (phoneme classes, Gaussian mixture model (GMM) tokenizer, and probabilistic GMM tokenizer) are proposed to replace it. We propose feature pruning and feature regulation methods that process the mismatched features to improve speaker recognition performance. For the feature regulation method, a strategy of maximizing the between-class distance while minimizing the within-class distance is adopted to train the transformation matrix that regulates the mismatched features. Experiments conducted on the Mandarin affective speech corpus (MASC) show that our feature pruning and feature regulation methods increase the identification rate (IR) by 3.64% and 6.77%, respectively, compared with the baseline GMM-UBM (universal background model) algorithm. Corresponding IR increases of 2.09% and 3.32% are obtained when our methods are applied to the state-of-the-art i-vector algorithm.
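The training criterion for the regulation matrix (maximize between-class scatter, minimize within-class scatter) can be illustrated with an LDA-style sketch. This is an assumption about the general form of the objective; the paper's exact optimization, class definitions, and how the matrix is applied to mismatched features may differ.

```python
import numpy as np

def train_regulation_matrix(X, y, dim):
    """Train a projection that maximizes between-class scatter relative
    to within-class scatter (Fisher/LDA-style criterion -- assumed form).

    X: (n_samples, n_features) feature matrix
    y: (n_samples,) class labels
    dim: number of output dimensions
    Returns a (n_features, dim) transformation matrix.
    """
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mu)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # Directions maximizing the between/within scatter ratio are the
    # leading eigenvectors of pinv(Sw) @ Sb.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-eigvals.real)
    return eigvecs[:, order[:dim]].real
```

Under this criterion, mismatched features projected by the learned matrix are pulled toward a space where class (e.g. speaker or acoustic class) separation is preserved, which is consistent with the regulation goal stated in the abstract.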