Cross-media retrieval is an interesting research topic that seeks to remove the barriers among different modalities. To enable cross-media retrieval, it is necessary to find correlation measures between heterogeneous low-level features and to judge their semantic similarity. This paper presents a novel approach to learning the cross-media correlation between visual features and auditory features for image-audio retrieval. A semi-supervised correlation preserving mapping (SSCPM) method is described to construct an isomorphic SSCPM subspace in which the canonical correlations between the original visual and auditory features are preserved. A subspace optimization algorithm is proposed to improve the quality of local image clusters and audio clusters in an interactive way. A unique relevance feedback strategy is developed to update the knowledge of cross-media correlation by learning from user behavior, so that retrieval performance is enhanced in a progressive manner. Experimental results demonstrate the effectiveness of our approach.