To investigate the role of signal correlation in speech emotion recognition, a spectrogram-based feature extraction algorithm for speech emotion recognition is proposed. First, the spectrogram is processed to obtain a normalized grayscale spectrogram image. Then, Gabor spectrograms at different scales and orientations are computed, and local binary patterns (LBP) are used to extract texture features from these Gabor spectrograms. Finally, the LBP features extracted from the Gabor spectrograms at different scales and orientations are concatenated to form a new speech emotion feature for emotion recognition. Experimental results on the Berlin database (EMO-DB) and the FAU Aibo corpus show that, compared with existing prosodic, frequency-domain, and voice-quality features, the proposed feature improves the recognition rate by more than 3%; after fusion with acoustic features, its recognition rate is at least 5% higher than that of the earlier acoustic features alone. Therefore, this new speech emotion feature can effectively recognize different types of emotional speech.
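The following minimal Python sketch illustrates the described pipeline (normalized grayscale spectrogram, multi-scale and multi-orientation Gabor filtering, LBP histograms, concatenation). The library choices (scipy, scikit-image), spectrogram settings, filter-bank sizes, and LBP parameters are illustrative assumptions, not the authors' exact configuration.

import numpy as np
from scipy.signal import spectrogram, fftconvolve
from skimage.filters import gabor_kernel
from skimage.feature import local_binary_pattern

def gabor_lbp_features(signal, fs=16000, n_scales=3, n_orientations=4,
                       lbp_points=8, lbp_radius=1):
    # Log-magnitude spectrogram, normalized to a [0, 1] grayscale image.
    _, _, sxx = spectrogram(signal, fs=fs, nperseg=400, noverlap=240)
    gray = np.log1p(sxx)
    gray = (gray - gray.min()) / (gray.max() - gray.min() + 1e-12)

    features = []
    for scale in range(n_scales):
        for k in range(n_orientations):
            # Gabor kernel at the given scale (spatial frequency) and orientation.
            kernel = gabor_kernel(frequency=0.25 / (2 ** scale),
                                  theta=np.pi * k / n_orientations)
            response = fftconvolve(gray, np.real(kernel), mode='same')
            # Uniform LBP codes of the Gabor response, summarized as a histogram.
            codes = local_binary_pattern(response, lbp_points, lbp_radius,
                                         method='uniform')
            hist, _ = np.histogram(codes, bins=lbp_points + 2,
                                   range=(0, lbp_points + 2), density=True)
            features.append(hist)
    # Concatenate histograms from all scales/orientations into one feature vector.
    return np.concatenate(features)

# Example: feature vector for one second of synthetic audio.
feat = gabor_lbp_features(np.random.randn(16000))
print(feat.shape)  # (n_scales * n_orientations * (lbp_points + 2),)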
To address the problem of feature mismatch between experimental databases, a key issue in cross-corpus speech emotion recognition, an auditory attention model based on Chirplet is proposed for feature extraction. First, the auditory attention model is employed to detect variational emotion features and extract the spectral features. Then, a selective attention mechanism model is proposed to extract the salient gist features, which are shown to be related to the expected performance in cross-corpus testing. Furthermore, Chirplet time-frequency atoms are introduced into the model. By forming a complete atom dictionary, the Chirplet improves spectral feature extraction, including the amount of information captured. Since samples drawn from multiple databases exhibit multi-component characteristics, the Chirplet also expands the scale of the feature vector in the time-frequency domain. Experimental results show that, compared with the traditional feature model, the proposed feature extraction approach with a prototypical classifier achieves a significant improvement in cross-corpus speech emotion recognition. In addition, the proposed method is more robust when the training set and the testing set come from inconsistent sources.
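For reference, a Gaussian chirplet atom of the kind used to build such a dictionary is commonly written as below; the exact parameterization and normalization adopted by the authors may differ.

g_{t_0, f_0, c, \sigma}(t) = \frac{1}{\sqrt[4]{\pi\sigma^{2}}}
  \exp\!\left( -\frac{(t - t_0)^{2}}{2\sigma^{2}} \right)
  \exp\!\left( j 2\pi \left[ f_0 (t - t_0) + \frac{c}{2}(t - t_0)^{2} \right] \right)

Here t_0 is the time center, f_0 the frequency center, \sigma the duration, and c the chirp rate; setting c = 0 recovers an ordinary Gabor atom, so the chirp-rate parameter is what enlarges the atom dictionary and lets it represent frequency-modulated, multi-component structure in the samples.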