Automated Evaluation of Handwritten Assessments
Automated evaluation of handwritten answers has long been a challenging problem in scaling education systems, and speeding up evaluation remains the major bottleneck for increasing instructor throughput. This paper describes an effective method for automatically evaluating short descriptive handwritten answers from digitized images. Our goal is to assign a student's handwritten answer an evaluation score that is comparable to human-assigned scores. Existing work in this domain has mainly focused on evaluating handwritten essays with handcrafted, non-semantic features. Our contribution is twofold: 1) we model the problem as a self-supervised, feature-based classification task that fine-tunes itself for each question without any explicit supervision; 2) we introduce semantic analysis for auto-evaluation in the handwritten-text space, combining Information Retrieval and Extraction (IRE) and Natural Language Processing (NLP) methods to derive a set of useful features. We tested our method on three datasets created from various domains with the help of students of different age groups. Experiments show that our method performs comparably to human evaluators.