This paper describes the medical image retrieval and annotation tasks of ImageCLEF 2006. Both tasks are described with respect
to goals, databases, topics, results, and techniques. The ImageCLEFmed retrieval task had 12 participating groups (100 runs).
Most runs were automatic, with only a few manual or interactive. Purely textual runs were in the majority compared to purely
visual runs but most
... [Show full abstract] were mixed, using visual and textual information. None of the manual or interactive techniques were significantly
better than automatic runs. The best-performing systems combined visual and textual techniques, but combining visual and textual features often did not improve performance. Purely visual systems performed well only on visual topics.
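As an illustration of what such a mixed run can look like, the sketch below shows simple score-level (late) fusion of textual and visual retrieval results in Python. The weight alpha, the min-max normalization, and the toy score dictionaries are assumptions for illustration, not the method of any particular participant.

def min_max_normalize(scores):
    # Rescale scores to [0, 1] so the two modalities are comparable.
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {doc: 0.0 for doc in scores}
    return {doc: (s - lo) / (hi - lo) for doc, s in scores.items()}

def fuse(text_scores, visual_scores, alpha=0.7):
    # Linear late fusion; alpha weights the textual modality (assumed value).
    t = min_max_normalize(text_scores)
    v = min_max_normalize(visual_scores)
    docs = set(t) | set(v)
    fused = {d: alpha * t.get(d, 0.0) + (1 - alpha) * v.get(d, 0.0) for d in docs}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Toy example for one topic (hypothetical image IDs and scores).
print(fuse({"img_12": 3.2, "img_07": 1.1}, {"img_07": 0.9, "img_31": 0.4}))

A fixed linear weight is the simplest combination rule; the observation above that combinations often did not improve performance suggests such weights are sensitive to the topic type.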
The medical automatic annotation task used a larger database of 10,000 training images from 116 classes, up from 9,000 images from 57 classes in 2005. Twelve groups submitted 28 runs. Despite the larger number of classes, results were almost as good as in 2005, which, given the harder task, demonstrates a clear improvement in performance. The best system of 2005 would only have reached a mid-field rank in 2006.
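For a concrete picture of the annotation task, the following is a minimal nearest-neighbor baseline sketch in Python, scored here by error rate. The 32-dimensional features and the random stand-in data are assumptions; only the counts (10,000 training images, 116 classes) come from the task description.

import numpy as np

def nearest_neighbor_predict(train_feats, train_labels, test_feats):
    # Assign each test image the class of its closest training image.
    preds = []
    for x in test_feats:
        dists = np.linalg.norm(train_feats - x, axis=1)
        preds.append(train_labels[int(np.argmin(dists))])
    return np.array(preds)

def error_rate(preds, truth):
    # Fraction of misclassified test images.
    return float(np.mean(preds != truth))

# Stand-in data: 10,000 training images from 116 classes (counts from the task),
# with random 32-dimensional features in place of real image descriptors.
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(10_000, 32))
train_labels = rng.integers(0, 116, size=10_000)
test_feats = rng.normal(size=(1_000, 32))
print(error_rate(nearest_neighbor_predict(train_feats, train_labels, test_feats),
                 rng.integers(0, 116, size=1_000)))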