Fig. 3: Detail plot of the used data set. The topmost attributes of the global decision tree model were used as axes.

Source publication
Conference Paper
Pain is a highly affective state that is accompanied by a facial expression. In this paper we compare different classifiers on their possibilities to classify pain from facial point data. Furthermore we investigate the need for training classifiers on each subject's data individually. We show that most used classifiers are suitable for the classifying fa...

Context in source publication

Context 1
... is understood that the pain face is not the usual face but a distortion of the neutral face. Taking this into account, we averaged the distances, angles and ratios of the non-pain data records and added, for every record, the ratios of those attributes to their means. As the deformation of the face is potentially highly individual, we also added the ratios of those attributes to the means of the considered subject's non-pain data. Of course, all mentioned relational measures are implied in the point data alone. However, they would be inaccessible for some classifiers, e.g. Decision Trees (see Sect. 3.1). Hence we decided to include those measures in an explicit form. Based on the data set with 534 entries and 178 attributes (including image id, person number and class), classifiers were trained. In the following, the classifiers used are briefly described. For further reading see [7] or, for Support Vector Machines, [8].

Decision Trees classify an example according to a set of tree-structured if-then rules. Each inner node of the tree denotes an attribute and branches according to its values. The Decision Tree classifier is the only symbolic classifier we used. We considered using it because the constructed decision model is human-understandable and thus has an explaining element. For learning these trees we used ID3Numerical, a modification of Quinlan's ID3 [9] which can handle numerical attributes.

Naive Bayes is a statistical classifier based on Bayes' law. It estimates the probability P(c | x). For this purpose it assumes that all attributes are statistically independent (thus "naive").

Support Vector Machines try to find a hyperplane, possibly in a higher-dimensional space, that separates the classes. For classification only the data records closest to the hyperplane are used: the Support Vectors.

k-Nearest Neighbours (kNN) considers each data record as a point in $\mathbb{R}^n$. An unseen instance is classified by searching for the k nearest points in the training example space and assigning their mode (majority) class. All training examples are kept in the feature space; therefore kNN is called a lazy learner.

The Perceptron considers each data record as a vector of real-valued inputs $(x_1, \ldots, x_n)$. These are weighted with $(w_0, \ldots, w_n)$. For classification it calculates the linear combination $\sum_{i=0}^{n} w_i x_i$, where $x_0$ is always 1. If the result is greater than 0 it returns positive, otherwise negative.

Neural Networks use a graph of sigmoid units to calculate the desired class. Sigmoid units are similar to perceptrons except that they use a sigmoidal output. This output serves as input for the next unit. In this study we used a linear and acyclic network.

Classification by Regression trains a regression model for each class; the class with the higher predicted value is then selected. As base regression model we used linear regression, which tries to find the linear function that predicts the examples best.

For each classifier the learning generally consisted of two phases: first its optimal parameters were obtained, then its performance was measured in a cross-validation. As kNN suffers from the curse of dimensionality, the attributes were weighted for this classifier before parameter optimization. Cross-validation was done using 10 partitions. The partitions were drawn according to single data records by means of stratified sampling. Parameters were optimized via a systematic grid search: parameter combinations were systematically evaluated using 10-fold cross-validations, and the combination which performed best was selected.
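As a rough illustration of this two-phase protocol (systematic grid search with 10-fold cross-validation for parameter selection, then a stratified 10-fold cross-validation over single data records for performance estimation), the following sketch reproduces the idea for one of the classifiers, an SVM. The original experiments were run in RapidMiner, not scikit-learn; the file names, preprocessing and parameter grid below are placeholders, not details from the paper.

```python
# Illustrative sketch only: the paper used RapidMiner, not scikit-learn.
# Feature and label files as well as the parameter grid are hypothetical.
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.load("features.npy")  # hypothetical: numeric attributes of the 534 records
y = np.load("labels.npy")    # hypothetical: binary pain / non-pain labels

# Phase 1: systematic grid search; every parameter combination is evaluated
# with a 10-fold cross-validation and the best-performing one is kept.
pipeline = make_pipeline(StandardScaler(), SVC())
param_grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": ["scale", 0.01, 0.001]}
search = GridSearchCV(
    pipeline,
    param_grid,
    cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
    scoring="accuracy",
)
search.fit(X, y)

# Phase 2: the performance of the selected combination is measured in a
# stratified 10-fold cross-validation drawn over single data records.
outer_cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
scores = cross_val_score(search.best_estimator_, X, y, cv=outer_cv, scoring="accuracy")
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```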
For the Neural Network no systematic approach was chosen: taking the long training times and the size of the parameter space into account, we decided on an evolutionary technique.

Attribute weighting was performed using forward weighting. Initially every attribute was assigned a weight of 0; each attribute was then independently weighted using a linear search. Attribute selection was used to overcome Naive Bayes' assumption that attributes are statistically independent. Carried out as forward selection, an initial population is created, one individual per attribute. Further attributes are then added to the best ones as long as performance increases.

These procedures were carried out once with the whole data set (global classifier) and once for each subject, using only that subject's data (individual classifiers). We ran the experiments with RapidMiner on a Fujitsu Siemens Computers LIFEBOOK T Series (Intel Core 2 Duo P8400, 2 GB) and a Fujitsu Siemens Computers Esprimo (Intel Pentium 4, 3.00 GHz, 1 GB).

The results of the different classifiers are shown in Tab. 1. Obviously most classifiers are suitable for this task; Support Vector Machines and k-Nearest Neighbours apparently perform best. Unfortunately it is not possible to build an exact ranking: due to the small number of subjects, no significance tests were made. Decision Tree, Support Vector Machine, Regression, Naive Bayes and k-Nearest Neighbours perform similarly to adults classifying clinical pain videos [10]. No detailed statement about the Neural Network can be made: the global parameter optimization crashed after 10 days, and since no interim results were available, performance estimation was done with manually chosen parameters. A repetition of the parameter optimization was not possible due to time limitations.

The detail plot in Fig. 3 shows that the data is probably not linearly separable, which explains the bad performance of the Perceptron classifier; it performs even worse than guessing. For most classifiers the individual approach seems to deliver better results, but, due to the small number of subjects, no detailed comparison is possible here either.

We showed that many classifiers are suitable for predicting pain from facial expression. Global and individual classifiers perform nearly equally. However, due to the low subject number, and since we only selected subjects that showed prototypical facial pain displays [11], it is questionable whether these findings can be generalized. Currently we are able to decide whether a shown face is prototypical for pain or not. However, the true question is whether the person whose face is depicted is experiencing pain. Therefore we will deal with predicting the self-reported VAS values in further work. Additionally we will tackle the issue of mixing up different emotions, first trying to distinguish between pain and disgust. For this we will use more subjects in the initial psychological study. Further studies should also address a broader variety of subjects, for example regarding age or gender.

Acknowledgements. We thank Simone Burkhardt and Ina Schulz for the data collection and picture extraction. The study was supported by the Dr. Werner ...
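The forward attribute selection used for Naive Bayes in the excerpt above (start from single attributes, then greedily add further attributes as long as performance increases) could be sketched roughly as follows. This is a simplified illustration, not the authors' RapidMiner operator; the Gaussian Naive Bayes model and the cross-validated accuracy criterion are assumptions.

```python
# Simplified sketch of greedy forward attribute selection for Naive Bayes.
# Not the authors' RapidMiner setup; the scoring criterion is an assumption.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB


def forward_selection(X, y, cv=10):
    """Greedily add attributes as long as cross-validated accuracy improves."""
    remaining = list(range(X.shape[1]))
    selected = []
    best_score = -np.inf
    while remaining:
        # Score every candidate extension of the current attribute set.
        candidate_scores = {
            j: cross_val_score(GaussianNB(), X[:, selected + [j]], y, cv=cv).mean()
            for j in remaining
        }
        j_best, score = max(candidate_scores.items(), key=lambda kv: kv[1])
        if score <= best_score:
            break  # stop as soon as adding another attribute no longer helps
        selected.append(j_best)
        remaining.remove(j_best)
        best_score = score
    return selected, best_score


# Hypothetical usage with the feature matrix and labels from the earlier sketch:
# chosen, acc = forward_selection(X, y)
```

The forward attribute weighting used for kNN follows the same greedy skeleton, except that each attribute receives a continuous weight found by a linear search instead of a hard include/exclude decision.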

Similar publications

Article
Fibromyalgia (FM) is a generalized chronic pain condition associated with a variety of symptoms, including altered cognitive and emotional processing. It has been proposed that FM patients show a preferential allocation of attention to information related to the symptoms of the disease, particularly to pain cues. However, the existing literature do...
Article
Simple Summary Fatal injuries in Thoroughbred racehorses typically occur due to an accumulation of bone damage, however, detecting their impending onset can be difficult as there are often no overt signs. In other horse populations, facial grimacing has been shown to be associated with orthopaedic pain. This study, therefore, aimed to investigate f...
Article
Earlier research studying the effects of social threat on the experience and expression of pain led to mixed results. In this study, female participants (N = 32) came to the lab with two confederates. Both confederates administered a total of 10 painful electrocutaneous stimuli to the participant. The framing of the administration was manipulated i...
Article
The function of empathic concern to process pain is a product of evolutionary adaptation. Focusing on 5- to 6-year old children, the current study employed eye-tracking in an odd-one-out task (searching for the emotional facial expression among neutral expressions, N = 47) and a pain evaluation task (evaluating the pain intensity of a facial expres...
Conference Paper
The study investigated how psychological factor (negative affect) and physiological reaction predicted the intensity of pain perceived and facial expressed by two groups of socioeconomic status (lower and upper SES). The study involved 201 participants (97 lower SES and 104 upper SES). All participants were healthy and pain-free. The pain was induc...

Citations

... [48]). In the first case, a training set needs to include AU sequences observed for pain episodes as well as for non-pain episodes (e.g., disgust), and the classifier is trained such that the rules have high sensitivity as well as high specificity for pain [49,50]. In the second case, only patterns characteristic of pain episodes are identified [50]. ...
Article
Background: Given the unreliable self-report in patients with dementia, pain assessment should also rely on the observation of pain behaviors, such as facial expressions. Ideal observers should be well trained and should observe the patient continuously in order to pick up any pain-indicative behavior, requirements that are beyond the realistic possibilities of pain care. Therefore, the need for video-based pain detection systems has been repeatedly voiced. Such systems would allow for constant monitoring of pain behaviors and thereby allow for a timely adjustment of pain management in these fragile patients, who are often undertreated for pain. Methods: In this road map paper we describe an interdisciplinary approach to developing such a video-based pain detection system. The development starts with the selection of appropriate video material of people in pain as well as the development of technical methods to capture their faces. Furthermore, single facial motions are automatically extracted according to an international coding system. Computer algorithms are trained to detect the combination and timing of those motions which are pain-indicative. Results/conclusion: We hope to encourage colleagues to join forces and to inform end-users about an imminent solution to a pressing pain-care problem. For the near future, implementation of such systems can be foreseen to monitor immobile patients in intensive and postoperative care situations.
Article
Pain sensation is essential for survival, since it draws attention to physical threat to the body. Pain assessment is usually done through self-reports. However, self-assessment of pain is not available in the case of noncommunicative patients, and therefore, observer reports should be relied upon. Observer reports of pain could be prone to errors due to subjective biases of observers. Moreover, continuous monitoring by humans is impractical. Therefore, automatic pain detection technology could be deployed to assist human caregivers and complement their service, thereby improving the quality of pain management, especially for noncommunicative patients. Facial expressions are a reliable indicator of pain, and are used in all observer-based pain assessment tools. Following the advancements in automatic facial expression analysis, computer vision researchers have tried to use this technology for developing approaches for automatically detecting pain from facial expressions. This paper surveys the literature published in this field over the past decade, categorizes it, and identifies future research directions. The survey covers the pain datasets used in the reviewed literature, the learning tasks targeted by the approaches, the features extracted from images and image sequences to represent pain-related information, and finally, the machine learning methods used.