This paper presents our methods for the bilingual ad hoc retrieval and automatic annotation tasks of ImageCLEF 2005. In the ad hoc task, we propose a feedback method for cross-media translation in a visual run and combine the results of the visual and textual runs to produce the final result. Experimental results show that our feedback method performs well: compared with the initial visual retrieval, average precision increases from 8% to 34% after feedback, and reaches 39% when the results of the textual run and the visual run with pseudo relevance feedback are combined. In the automatic annotation task, we propose several methods to measure the similarity between a test image and a category, and each test image is classified into the most similar category. Experimental results show that the proposed approaches perform well, but the simplest method, 1-NN, achieves the best performance. We analyze these results in the paper.
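To make the run-combination step concrete, the sketch below shows one common way to merge a textual run with a visual run: min-max normalization of each run's scores followed by weighted linear interpolation. The abstract does not state which fusion scheme is actually used, so the normalization, the weight alpha, and the function names here are illustrative assumptions rather than the paper's method.

def min_max_normalize(run):
    """Scale a {doc_id: score} run into [0, 1] so two runs become comparable."""
    scores = run.values()
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0
    return {doc: (score - lo) / span for doc, score in run.items()}

def combine_runs(textual_run, visual_run, alpha=0.5):
    """Merge two runs by linear interpolation: alpha*textual + (1-alpha)*visual.
    The value of alpha is a hypothetical choice, not taken from the paper."""
    t = min_max_normalize(textual_run)
    v = min_max_normalize(visual_run)
    docs = set(t) | set(v)
    merged = {doc: alpha * t.get(doc, 0.0) + (1 - alpha) * v.get(doc, 0.0)
              for doc in docs}
    # Return a ranked list, best-scoring documents first.
    return sorted(merged.items(), key=lambda item: item[1], reverse=True)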
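For the annotation task, the 1-NN baseline mentioned above can be sketched as follows: a test image receives the category of its single nearest training image. The abstract does not specify the visual features or the distance measure, so the Euclidean distance over generic feature vectors used here is an assumption.

import numpy as np

def one_nn_annotate(test_feature, train_features, train_labels):
    # Distance from the test image to every training image
    # (Euclidean distance over feature vectors is an assumed choice).
    dists = np.linalg.norm(train_features - np.asarray(test_feature), axis=1)
    # The test image inherits the category of its nearest neighbour.
    return train_labels[int(np.argmin(dists))]

# Example with made-up data: three training images in two categories.
train_features = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1]])
train_labels = ["indoor", "outdoor", "indoor"]
print(one_nn_annotate([0.15, 0.18], train_features, train_labels))  # -> "indoor"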