In chess, as in many other domains, expert feedback is amply available in the form of annotated games. This feedback usually comes as qualitative information, because human annotators find it hard to determine precise utility values for game states. It is therefore more reasonable to use such annotations in a preference-based learning setup, where it is not required to assign numerical values to the qualitative symbols. We show how game annotations can be used for learning a utility function by translating them into preferences. We evaluate the resulting function by creating multiple heuristics based on different-sized subsets of the training data and comparing them in a tournament scenario. The results show that learning from game annotations is possible, but our learned functions did not quite reach the performance of the original, manually tuned function. The reason for this shortfall seems to be that human annotators only annotate "interesting" positions, which makes it hard to learn basic information, such as material advantage, from game annotations alone.
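A minimal sketch of the translation step described above, assuming linear position features, a coarse ordering of common annotation glyphs, and a logistic (Bradley-Terry style) preference loss; all of these are illustrative choices, not the method from the paper:

```python
import numpy as np

# Assumed mapping of annotation glyphs to a coarse evaluation from
# White's point of view ("+-" = White winning, "-+" = Black winning).
ANNOTATION_SCORE = {"+-": 2, "+/-": 1, "=": 0, "-/+": -1, "-+": -2}

def annotations_to_preferences(positions):
    """Each position is (feature_vector, annotation_symbol).
    Emit a (preferred, other) pair whenever one annotated position
    is judged strictly better than another."""
    pairs = []
    for xa, sa in positions:
        for xb, sb in positions:
            if ANNOTATION_SCORE[sa] > ANNOTATION_SCORE[sb]:
                pairs.append((xa, xb))
    return pairs

def learn_utility(pairs, dim, lr=0.1, epochs=200):
    """Fit a linear utility u(x) = w.x so preferred positions score
    higher, via gradient ascent on a logistic preference likelihood."""
    w = np.zeros(dim)
    for _ in range(epochs):
        for better, worse in pairs:
            diff = np.asarray(better) - np.asarray(worse)
            p = 1.0 / (1.0 + np.exp(-w @ diff))  # P(better preferred)
            w += lr * (1.0 - p) * diff           # gradient ascent step
    return w

# Toy features: (material balance, mobility), both from White's view.
positions = [
    ((3.0, 0.2), "+-"),
    ((1.0, 0.1), "+/-"),
    ((0.0, 0.0), "="),
    ((-2.0, -0.3), "-+"),
]
w = learn_utility(annotations_to_preferences(positions), dim=2)
print("learned weights:", w)
```

In this toy run the learned weights assign positive utility to material and mobility, illustrating how pairwise preferences alone can recover a ranking-consistent utility function without ever fixing numerical values for the annotation symbols themselves.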