A scheme for analyzing timbre in spatial sound with a binaural auditory model is proposed, with Ambisonics taken as an example. Ambisonics is a spatial sound system based on physical sound field reconstruction. The errors and timbre coloration in the final reconstructed sound field depend on the spatial aliasing errors at both the recording and reproduction stages of Ambisonics. The binaural loudness level spectra of the Ambisonics reconstruction are calculated using Moore's revised loudness model and then compared with those of a real sound source, so as to evaluate the timbre coloration in Ambisonics quantitatively. The results indicate that, in the case of ideal independent signals, the high-frequency limit and the radius of the region without perceived timbre coloration increase with the order of Ambisonics. On the other hand, in the case of recording with a microphone array, once the high-frequency limit of the microphone array exceeds that of sound field reconstruction, array recording has little influence on the binaural loudness level spectra, and thus on the timbre of the final reconstruction, up to the high-frequency limit of reproduction. Based on the binaural auditory model analysis, a scheme for optimizing the design of Ambisonics recording and reproduction is also suggested. A subjective experiment yields results consistent with those of the binaural model, verifying the effectiveness of the model analysis.
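The comparison step of the scheme above can be sketched as follows. This is a minimal stand-in, not Moore's revised loudness model: each band level is a simple dB proxy for a per-ERB-band loudness level, and the 1 dB just-noticeable difference is an illustrative assumption, not a value from the paper.

```python
import math

# Simplified proxy for a per-band loudness level spectrum (NOT Moore's
# revised model, which the paper actually uses): map band powers to dB.
def band_levels(band_powers):
    eps = 1e-12  # guard against log of zero
    return [10.0 * math.log10(p + eps) for p in band_powers]

def max_coloration(reconstructed, reference):
    """Maximum across-band deviation (dB) of the reconstructed spectrum
    from that of the real source -- the quantity compared in the paper."""
    r = band_levels(reconstructed)
    s = band_levels(reference)
    return max(abs(a - b) for a, b in zip(r, s))

# Assumed per-band just-noticeable difference (illustrative only).
JND_DB = 1.0

def timbre_colored(reconstructed, reference, jnd=JND_DB):
    """Flag audible timbre coloration when any band deviates beyond the JND."""
    return max_coloration(reconstructed, reference) > jnd
```

In the paper this decision is repeated over source positions and frequencies to map out the region and high-frequency limit without perceived coloration.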
This paper reports recent work and progress on a PC- and C++-based virtual auditory environment (VAE) system platform. By tracking the instantaneous position and orientation of the listener's head and dynamically simulating the acoustic propagation from sound source to the two ears, the system is capable of recreating free-field virtual sources at various directions and distances, as well as auditory perception in reflective environments, via headphone presentation. Schemes for improving VAE performance are proposed, including PCA-based (principal components analysis) near-field virtual source synthesis and simulation of six degrees of freedom of head movement. In particular, the PCA-based scheme greatly reduces the computational cost of synthesizing multiple virtual sources. Tests demonstrate that the system exhibits improved performance compared with some existing systems: it is able to render up to 280 simultaneous virtual sources using the conventional scheme, and 4500 virtual sources using the PCA-based scheme. A set of psychoacoustic experiments also validates the performance of the system and, at the same time, provides some preliminary results for research on binaural hearing. The functions of the VAE system are being extended, and the system serves as a flexible and powerful platform for future binaural hearing research and virtual reality applications.
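The cost saving of the PCA-based scheme comes from reordering the rendering: instead of one HRTF convolution per source, all sources are first mixed into a few basis channels with scalar weights, and only the basis filters are convolved. The sketch below illustrates this structure for one ear; the `weights` and `basis` arrays are hypothetical stand-ins for the per-direction PCA weights and principal-component filters, not the system's actual data.

```python
def convolve(x, h):
    """Direct-form FIR convolution of signal x with filter h."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def pca_render(sources, weights, basis):
    """Render N virtual sources for one ear with only K convolutions.

    sources: list of N source signals
    weights: N x K per-direction weights (hypothetical PCA results)
    basis:   K basis filters (principal components of the HRTF set)
    """
    K = len(basis)
    n = max(len(s) for s in sources)
    # Step 1: mix all sources into K basis channels (cheap scalar weights).
    mixed = [[0.0] * n for _ in range(K)]
    for s, w in zip(sources, weights):
        for k in range(K):
            for i, v in enumerate(s):
                mixed[k][i] += w[k] * v
    # Step 2: convolve only K times, independent of the number of sources.
    out = [0.0] * (n + len(basis[0]) - 1)
    for k in range(K):
        for i, v in enumerate(convolve(mixed[k], basis[k])):
            out[i] += v
    return out
```

With K much smaller than N, the per-sample filtering cost drops from N convolutions to K, which is consistent with the order-of-magnitude jump in renderable sources reported above.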
A binaural-loudness-model-based method for evaluating the spatial discrimination threshold of head-related transfer function (HRTF) magnitudes is proposed. As the input to the binaural loudness model, the HRTF magnitude variations caused by spatial position variations were first calculated from a high-resolution HRTF dataset. Then, three perceptually relevant parameters, namely the interaural loudness level difference, the binaural loudness level spectra, and the total binaural loudness level, were derived from the binaural loudness model. Finally, the spatial discrimination thresholds of HRTF magnitude were evaluated according to the just-noticeable differences of the above-mentioned parameters. A series of psychoacoustic experiments was also conducted to obtain the spatial discrimination threshold of HRTF magnitudes. The results indicate that the threshold derived from the proposed binaural-loudness-model-based method is consistent with that obtained from the traditional psychoacoustic experiment, validating the effectiveness of the proposed method.
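The final evaluation step above can be sketched as a JND comparison over the three model-derived parameters. The JND values and the dictionary layout below are illustrative assumptions, not the quantities measured in the paper.

```python
# Hypothetical just-noticeable differences for the three perceptual
# parameters (illustrative values, not the paper's measured JNDs).
JND = {"ILLD": 0.8, "spectrum": 1.0, "total": 0.5}

def discriminable(ref, shifted, jnd=JND):
    """Decide whether a source-position shift is audible, given the three
    model-derived parameters at the reference and shifted positions.

    ref/shifted: dicts with 'ILLD' (interaural loudness level difference),
    'spectrum' (per-band binaural loudness level spectrum), and 'total'
    (total binaural loudness level).
    """
    if abs(ref["ILLD"] - shifted["ILLD"]) > jnd["ILLD"]:
        return True
    if any(abs(a - b) > jnd["spectrum"]
           for a, b in zip(ref["spectrum"], shifted["spectrum"])):
        return True
    return abs(ref["total"] - shifted["total"]) > jnd["total"]

def threshold_shift(ref, shifted_by_angle, angles, jnd=JND):
    """Smallest angular shift whose parameter change exceeds a JND."""
    for a in sorted(angles):
        if discriminable(ref, shifted_by_angle[a], jnd):
            return a
    return None  # no shift in the tested range is discriminable
```

Scanning increasing angular shifts until one becomes discriminable yields the model-predicted spatial discrimination threshold, which the paper compares against the psychoacoustic result.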
A model for the auditory spatial discrimination threshold of head-related transfer function (HRTF) magnitude spectra is proposed. Using high-spatial-resolution HRTF data obtained by numerical calculation, the variations in HRTF magnitude spectra caused by changes in sound source position were computed, and Moore's loudness model was then used to analyze the variations of three auditory perceptual quantities: the interaural loudness level difference, the binaural loudness level spectra, and the total loudness level. Based on the existing just-noticeable differences of these three perceptual quantities, the estimates obtained by the model from the variations of the interaural loudness level difference and the binaural loudness level spectra are consistent with psychoacoustic experiments; the model is therefore an effective means of predicting the auditory spatial discrimination threshold and can be used to simplify virtual auditory signal processing and data storage.