Conference Paper

Detecting demeanor for healthcare with machine learning


... Outbreak prediction, medical imaging diagnosis, behavioural modification, patient record keeping, etc., are among the most widely discussed quality pillars of the renowned ML concept, which further extends its services for the benefit of society through healthcare. The effectiveness and reliable performance of these ML attributes provide the essential foundations wherever such services are needed in healthcare practice [58,59]. ...
Article
Full-text available
Machine Learning (ML) applications are making a considerable impact on healthcare. ML is a subtype of Artificial Intelligence (AI) technology that aims to improve the speed and accuracy of physicians' work. Countries are currently dealing with overburdened healthcare systems and a shortage of skilled physicians, where AI offers great promise. Healthcare data can be used gainfully to identify the optimal trial sample, collect more data points, assess ongoing data from trial participants, and eliminate data-based errors. ML-based techniques assist in detecting early indicators of an epidemic or pandemic. Such algorithms examine satellite data, news and social media reports, and even video sources to determine whether an outbreak is likely to spiral out of control. Using ML for healthcare can open up a world of possibilities in this field. It frees up healthcare providers' time to focus on patient care rather than searching for or entering information. This paper studies ML and its need in healthcare, then discusses the associated features and appropriate pillars of ML for a healthcare structure. Finally, it identifies and discusses the significant applications of ML for healthcare. The applications of this technology in healthcare operations can be tremendously advantageous to the organisation. ML-based tools are used to provide various treatment alternatives and individualised treatments and to improve the overall efficiency of hospitals and healthcare systems while lowering the cost of care. In the near future, ML will impact both physicians and hospitals. It will be crucial in developing clinical decision support, illness detection, and personalised treatment approaches to provide the best potential outcomes.
... The mood of patients was detected by implementing an intelligent RealSense camera system prototype. ML, an SVM, and the RealSense facial detection system can be utilised to track patient demeanour for pain monitoring [265]. ...
Article
Full-text available
Due to the rapid development of fifth-generation (5G) applications and the increased demand for even faster communication networks, we expect to witness the birth of a new 6G technology within the next ten years. Many references suggest that the 6G wireless network standard may arrive around 2030. Therefore, this paper presents a critical analysis of 5G wireless networks' significant technological limitations and reviews the anticipated challenges of 6G communication networks. In this work, we have considered the applications of three highly demanding domains, namely energy, the Internet of Things (IoT) and machine learning. To this end, we present our vision of how 6G communication networks should look to support the applications of these domains. This work presents a thorough review of 370 papers on the application of energy, IoT and machine learning in 5G and 6G from three major libraries: Web of Science, the ACM Digital Library, and IEEE Xplore. The main contribution of this work is to provide a more comprehensive perspective on the challenges, requirements, and context for potential work in the 6G communication standard.
... Sensor Enabled Affective Computing for Enhancing Medical Care (SenseCare) is a 4-year project funded by the European Union (EU) that applies Affective Computing to enhance and advance future healthcare processes and systems, especially in providing assistance to people with dementia, medical professionals, and caregivers [2]. By gathering activity and related sensor data to infer the emotional state of the patient as a knowledge stream of emotional signals, SenseCare can provide a basis for enhanced care and can alert medics, professional carers, and family members to situations where intervention is required [3] [4]. ...
Chapter
Emotion recognition has recently attracted much attention in both industrial and academic research, as it can be applied in many areas from education to national security. In healthcare, emotion detection has a key role, as emotional state is an indicator of depression and mental disease. Much research in this area focuses on extracting emotion-related features from images of the human face. Nevertheless, there are many other sources that can identify a person's emotion. In the context of MENHIR, an EU-funded R&D project that applies Affective Computing to support people in their mental health, a new emotion-recognition system based on speech is being developed. However, this system requires comprehensive data-management support in order to manage its input data and analysis results. As a result, a cloud-based, high-performance, scalable, and accessible ecosystem for supporting speech-based emotion detection is currently being developed, and it is discussed here.
... The paper summarised typical AI algorithms to enhance cellular networks. [39] aimed to provide such eHealth support to medical emergency first responders by adopting a machine learning algorithm to detect a patient's demeanour at the scene of an incident using the Intel RealSense camera system. The implementation and evaluation were carried out in a lab setting; the authors stated that the patient's condition could be captured and detected at the patient's location using 5G mobile edge computing. ...
Article
Full-text available
In 2019, 5G was introduced, and it is gradually being deployed all over the world. 5G introduces new concepts such as network slicing, to better support various applications with different performance requirements on data rate and latency, and edge and cloud computing, which will carry the computational load. This study aims to describe the functions and features of the key 5G technologies and conducts a survey of the latest development of driving technologies for 5G. The survey focuses on healthcare applications that would benefit from the advantages brought by 5G.
... In [12], the authors described a prototype system that uses ML (an SVM) for pain monitoring in emergency situations using a camera (Intel RealSense). However, we did not find any healthcare monitoring system using ML over a 5G cellular network that focuses on autism centres. ...
... Sensor Enabled Affective Computing for Enhancing Medical Care (SenseCare) is a 48-month project funded by the European Union that aims to apply AC to enhance and advance future healthcare processes and systems, especially in providing assistance to people with dementia, medical professionals, and caregivers [2]. By gathering activity and related sensor data to infer the emotional state of the patient as a knowledge stream of emotional signals, SenseCare can provide a basis for enhanced care and can alert medics, professional care staff, and family carers to situations where intervention is required [3] [4]. One of the systems developed in SenseCare is a machine-learning-based emotion detection platform [5], which is used to provide an early insight into the emotional state of an observed person. ...
Conference Paper
Full-text available
Affective Computing is a rather new and multidisciplinary research field that seeks sophisticated automation in emotion detection for later analysis. However, automated emotion detection and analysis also require comprehensive data management support, e.g. to keep control of produced data and to enable its efficient reuse through classification with established terminology. This paper contributes to data management aspects in Affective Computing and to automation support in emotion classification on the basis of a personal traits analysis. We therefore describe the implementation of a taxonomy management system, derived from the requirements of a case study that investigates the relationship between personality and emotions in Affective Computing. The study makes use of machine learning software developed by SenseCare, an EU-funded R&D project that applies Affective Computing to enhance and advance future healthcare processes and systems.
... Affective computing is an emerging field that attempts to model technology to detect, predict, and display emotions, with the goal of improving human-computer interactions [15], [16]. One example of affective computing in action is the SenseCare project, which aims to integrate multiple methods of emotion detection in order to provide objective insight into people's well-being [17], [18]. Another example is the SliceNet project (https://5g-ppp.eu/slicenet/), ...
Conference Paper
Full-text available
This paper describes a new real-time emotion detection system based on a video feed. It demonstrates how a bespoke support vector machine (SVM) can be utilized to provide quick and reliable classification, using 68-point facial landmarks as features. In a lab setting, the application has been trained to detect six different emotions by monitoring changes in facial expressions. Its utility as a basis for evaluating people's emotional condition from video using machine learning is discussed.
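The landmark-plus-SVM pipeline described in this abstract can be sketched roughly as follows; the data here is a random stand-in for real 68-point landmarks, and the RBF kernel and scaling step are assumptions rather than the paper's exact configuration:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for 68 (x, y) facial landmarks per frame,
# flattened into a 136-dimensional feature vector.
n_frames, n_features = 200, 68 * 2
X = rng.normal(size=(n_frames, n_features))
# Six emotion classes, matching the lab setting described above.
y = rng.integers(0, 6, size=n_frames)

# Scale landmark coordinates, then fit the SVM classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)

# Classify a few new frames.
pred = clf.predict(X[:5])
```

In practice the landmark vectors would come from a face-tracking stage per video frame rather than a random generator.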
... Mobile multimedia systems for healthcare are important for resource and information management. Meanwhile, the Internet of Things (IoT) is now gaining recognition among health stakeholders as a powerful enabling technology for ubiquitous and widespread healthcare monitoring [1], which can support better decisions on patients' diagnoses and lead to overall improvement of healthcare services. ...
Article
This paper proposes a new deep learning method, greedy deep weighted dictionary learning (GDWDL), for mobile multimedia-based medical disease analysis. Traditional dictionary learning methods neglect the relationship between the sample and the dictionary atoms, so we propose a weighted mechanism to connect the sample with the dictionary atoms. Meanwhile, traditional dictionary learning is prone to over-fitting when classifying patients from a limited training data set. Therefore, this paper adopts an ℓ2-norm regularization constraint, which limits the model space, enhances the generalization ability of the model and avoids over-fitting to some extent. In contrast to previous shallow dictionary learning, this paper proposes greedy deep dictionary learning. We adopt layer-by-layer training to add hidden layers, so that the local information between layers can be trained to maintain its own characteristics, reducing the risk of over-fitting and ensuring that each layer of the network converges, which improves the accuracy of training and learning. With the development of the Internet of Things (IoT) and the maturing of healthcare monitoring systems, the proposed method offers good reliability in the field of mobile multimedia for healthcare. The results show that the learning method performs well on the classification of mobile multimedia for medical diseases, with good accuracy, sensitivity and specificity, which may provide guidance for disease diagnosis in smart medicine.
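As a rough illustration of the baseline that GDWDL builds on, a plain sparse dictionary learning step (without the paper's weighting mechanism or layer-wise deepening, both of which are specific to GDWDL) might look like this in scikit-learn, on synthetic data:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
# Synthetic stand-in for 100 samples with 30 signal features each.
X = rng.normal(size=(100, 30))

# Learn a dictionary of 20 atoms; each sample is then represented
# as a sparse combination of atoms (the "codes").
dico = MiniBatchDictionaryLearning(n_components=20, alpha=0.5,
                                   random_state=0)
codes = dico.fit_transform(X)   # sparse code per sample
D = dico.components_            # dictionary atoms
recon = codes @ D               # approximate reconstruction of X
```

The weighted and deep variants in the paper replace this single shallow decomposition with stacked, weighted layers trained greedily one at a time.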
Chapter
Full-text available
Healthcare is the cardinal component on which the foundation of human welfare is laid. Healthcare research mainly focuses on the healthy living standards of individuals. The relationship between pulmonary embolism and cardiac arrest is presented in this paper. The proposed research is divided into two phases. The first phase establishes connectivity between the two medical fields by finding the relationship between pulse pressure and stroke volume. The second phase applies and compares machine learning algorithms on the connectivity formed above. A univariate feature-selection technique is performed initially in order to obtain the most relevant attributes. The overfitting problem is addressed by formulating an ensemble model, and a comparison between boosting and bagging classifiers is also made.
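The two analysis steps described above (univariate feature selection, then an ensemble comparison) can be sketched like this; the dataset is synthetic and the specific estimators are assumptions, not the chapter's exact models:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the pulse-pressure / stroke-volume data.
X, y = make_classification(n_samples=300, n_features=20,
                           n_informative=5, random_state=0)

# Step 1: univariate selection keeps the most relevant attributes.
X_sel = SelectKBest(f_classif, k=5).fit_transform(X, y)

# Step 2: compare a bagging and a boosting ensemble by
# cross-validated accuracy.
bag_acc = cross_val_score(BaggingClassifier(random_state=0),
                          X_sel, y, cv=5).mean()
boost_acc = cross_val_score(AdaBoostClassifier(random_state=0),
                            X_sel, y, cv=5).mean()
```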
Chapter
This research focuses on breast cancer prediction using an enhanced convolutional neural network (CNN). A data set related to breast cancer was considered during this research, and a convolutional neural network was implemented to predict breast cancer. The CNN mechanism classifies an image by breaking it down into features, which are reconstructed and predicted at the end. Edge-based samples were considered to reduce comparison time and space, which results in increased accuracy. The introduction section presents the basic concepts of a breast cancer prediction system; existing research in the relevant field is presented in the second section; the motivation and challenges of the research are explained afterwards; and the proposed work and results then present the simulation. Simulation results show that edge-based image processing in the convolutional neural network reduced time and space, and accuracy was also increased.
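As a minimal, hypothetical illustration of why edge-based samples shrink the data a classifier has to compare: a simple difference filter responds only at intensity edges and is zero over flat regions, so only the edge locations carry information.

```python
import numpy as np

# Toy 8x8 "image" with a single vertical intensity step.
img = np.zeros((8, 8))
img[:, 4:] = 1.0

# Horizontal difference filter as a crude edge detector:
# flat regions give 0, so only the edge column responds.
edges = np.abs(np.diff(img, axis=1))
edge_cols = np.nonzero(edges.sum(axis=0))[0]
```

Real CNN pipelines use learned or Sobel-style filters on actual mammography images, but the principle of discarding flat regions is the same.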
Chapter
Alzheimer’s disease (AD) is a chronic neurodegenerative disease. Difficulties recalling current events and daily task schedules, eye-vision problems, failure to maintain a daily routine, and problems reading and speaking new languages are the most common (early-to-mid-stage) symptoms of AD. Magnetic resonance imaging (MRI) is very popular for the detection of AD, and numerous research works address its early detection. However, we found a lack of attention to detecting AD and assisting AD patients using Internet of Things (IoT) devices inside smart homes over 5G wireless networks. In this paper, we propose AlziHelp: an Alzheimer’s disease detection and assistance system inside a smart home, focusing on 5G, using IoT and machine learning approaches. In our system, AD detection can be done easily using smart IoT devices inside a smart home in a 5G environment, and the system is also capable of assisting AD patients using machine learning (ML) approaches. Monitoring of daily tasks, reaction times when taking actions, and mismatches in series of actions are taken as input, and using k-nearest neighbours (k-NN) our system can easily detect AD. The system can also assist an AD patient in performing daily tasks by predicting events and actions. We strongly believe that AlziHelp can contribute to detecting AD and assisting people with AD so that they can live a normal life at home.
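A toy sketch of the k-NN step might look as follows; the three input features (task-completion rate, reaction time, out-of-order actions) and all numbers are invented stand-ins for the system's real smart-home measurements:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Hypothetical features per observation window: task-completion
# rate, reaction time (seconds), count of out-of-order actions.
typical = np.column_stack([rng.normal(0.9, 0.05, 50),
                           rng.normal(2.0, 0.5, 50),
                           rng.poisson(1, 50)])
at_risk = np.column_stack([rng.normal(0.6, 0.1, 50),
                           rng.normal(5.0, 1.0, 50),
                           rng.poisson(6, 50)])
X = np.vstack([typical, at_risk])
y = np.array([0] * 50 + [1] * 50)  # 0 = typical, 1 = AD indicators

# k-NN classifies a new window by majority vote of its neighbours.
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
pred = knn.predict([[0.55, 5.5, 7]])  # a window deep in the at-risk cluster
```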
Chapter
This research proposes machine learning algorithms in conjunction with cognitive-based networking as a remote patient monitoring framework for accurately predicting disease state and disease parameters from remotely monitored and measured patient biometric and biomedical signals. This system would support doctors and clinicians by providing hospitals with machine-learning-based predictive clinical decision support systems to remotely monitor patients and their diseases. In this proposed work, a cognitive radio (CR) network is simulated for optimization of spectrum sensing and energy detection. Further, two effective classification methods are evaluated on remotely measured physiological parameters, such as blood pressure and heart rate, of patients with two types of diseases: chronic kidney disease and heart disease. First, a support vector machine (SVM) model was trained on a heart disease dataset with inputs and binary targets. The disease parameter correlations between blood pressure and age, heart rate, and blood glucose level were plotted and their relationships modeled using SVM. Second, an artificial neural network (ANN) algorithm was employed for the detection of disease state with the two disease datasets: heart disease and chronic kidney diagnosis. With SVM, the accuracy was around 60% for heart disease and 84% for chronic kidney disease patients. The percentage of accurately categorized patients with ANN was 95% overall for heart disease and 93% overall for chronic kidney disease. ANN is more accurate and is recommended for predictive modeling of patient data in the proposed cognitive IoT remote patient monitoring system.
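The two classifiers compared in this chapter can be sketched side by side on synthetic data; the feature set, network size and all hyperparameters here are assumptions, not the chapter's actual configuration:

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for remotely measured physiological parameters
# (e.g. blood pressure, heart rate, glucose) with a binary target.
X, y = make_classification(n_samples=400, n_features=6,
                           n_informative=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# SVM baseline vs. a small feed-forward ANN, compared on held-out
# accuracy, mirroring the chapter's SVM-vs-ANN evaluation.
svm_acc = SVC().fit(X_tr, y_tr).score(X_te, y_te)
ann_acc = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                        random_state=0).fit(X_tr, y_tr).score(X_te, y_te)
```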
Conference Paper
Full-text available
This paper describes a new prototype that uses the Intel RealSense™ (Intel, 2017) commercial camera system, a support vector machine (SVM) and multiple facial expression databases to predict emotional states with high accuracy. The system, called the Mobile Agitation Tracker, is intended for the detection of agitation and discomfort in patients suffering from cognitive decline disorders such as dementia. By mapping the Intel RealSense’s 78 landmark points to an existing 68-point format, we were able to use existing facial expression databases as training data for machine learning algorithms. An experiment conducted by our research team also demonstrated the effectiveness of the application.
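The 78-to-68 landmark mapping is, at its core, an index lookup per frame. A hypothetical sketch follows (the index table below is random; the real correspondence is specific to the RealSense point layout and to the 68-point annotation scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical index table: for each of the 68 target points, the
# index of the corresponding RealSense point. The real table would
# be hand-built from the two landmark layouts.
realsense_to_68 = rng.choice(78, size=68, replace=False)

def remap(landmarks_78):
    """Reorder one frame of 78 (x, y) RealSense landmarks into the
    68-point format used by existing facial expression databases."""
    landmarks_78 = np.asarray(landmarks_78)
    return landmarks_78[realsense_to_68]

frame = rng.normal(size=(78, 2))  # stand-in for one tracked frame
frame_68 = remap(frame)
```

Once remapped, frames are directly comparable with 68-point training data, which is what lets the prototype reuse public databases.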
Conference Paper
Full-text available
The MAT project aims to develop and evaluate an initial set of algorithms that can detect agitation, restlessness and aggression in dementia patients. MAT uses vision-based analytics to track a subject's facial expressions in real time. The first version of MAT implements two use cases for the detection of restlessness and aggression. The project also has the potential to be used in more advanced machine learning and data analytics applications, typically for research purposes in elder care. Data sets for each subject's monitoring period are generated in CSV format, which could be used to populate a database or as input to machine learning classification algorithms/platforms.
Article
Full-text available
We present a new approach to automatically recognise pain expressions from video sequences, categorising pain into four levels: "no pain," "slight pain," "moderate pain," and "severe pain." First, facial velocity information, which is used to characterise pain, is determined using an optical-flow technique. Then visual words based on facial velocity are used to represent the pain expression in a bag-of-words model. Finally, a pLSA model is used for pain expression recognition; to improve recognition accuracy, class label information was used in learning the pLSA model. Experiments were performed on a pain expression dataset built by ourselves to test and evaluate the proposed method. The results show that the average recognition accuracy is over 92%, which validates its effectiveness.
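The bag-of-words step in this pipeline can be sketched as follows; the descriptors are random stand-ins for the paper's optical-flow facial-velocity features, and the vocabulary size of 16 is an arbitrary choice:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for per-frame facial-velocity descriptors, which the
# paper derives with an optical-flow technique.
descriptors = rng.normal(size=(500, 8))

# Build a visual vocabulary by clustering the descriptors;
# each cluster centre acts as one "visual word".
vocab = KMeans(n_clusters=16, n_init=10, random_state=0).fit(descriptors)

def bow_histogram(seq_descriptors):
    """Represent one video sequence as a normalised histogram of
    visual-word occurrences."""
    words = vocab.predict(seq_descriptors)
    hist = np.bincount(words, minlength=16).astype(float)
    return hist / hist.sum()

h = bow_histogram(rng.normal(size=(40, 8)))  # one 40-frame sequence
```

The resulting histograms are what a topic model such as pLSA would then consume for the final four-level classification.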
Article
Full-text available
A close relationship exists between the advancement of face recognition algorithms and the availability of face databases that vary, in a controlled manner, the factors affecting facial appearance. The CMU PIE database has been very influential in advancing research in face recognition across pose and illumination. Despite its success, the PIE database has several shortcomings: a limited number of subjects, a single recording session and only a few expressions captured. To address these issues we collected the CMU Multi-PIE database. It contains 337 subjects, imaged under 15 viewpoints and 19 illumination conditions in up to four recording sessions. In this paper we introduce the database and describe the recording procedure. We furthermore present results from baseline experiments using PCA and LDA classifiers to highlight similarities and differences between PIE and Multi-PIE.
Article
Full-text available
Pain is typically assessed by patient self-report. Self-reported pain, however, is difficult to interpret and may be impaired or in some circumstances (i.e., young children and the severely ill) not even possible. To circumvent these problems behavioral scientists have identified reliable and valid facial indicators of pain. Hitherto, these methods have required manual measurement by highly skilled human observers. In this paper we explore an approach for automatically recognizing acute pain without the need for human observers. Specifically, our study was restricted to automatically detecting pain in adult patients with rotator cuff injuries. The system employed video input of the patients as they moved their affected and unaffected shoulder. Two types of ground truth were considered. Sequence-level ground truth consisted of Likert-type ratings by skilled observers. Frame-level ground truth was calculated from presence/absence and intensity of facial actions previously associated with pain. Active appearance models (AAM) were used to decouple shape and appearance in the digitized face images. Support vector machines (SVM) were compared for several representations from the AAM and of ground truth of varying granularity. We explored two questions pertinent to the construction, design and development of automatic pain detection systems. First, at what level (i.e., sequence- or frame-level) should datasets be labeled in order to obtain satisfactory automatic pain detection performance? Second, how important is it, at both levels of labeling, that we non-rigidly register the face?
Conference Paper
Full-text available
A major factor hindering the deployment of a fully functional automatic facial expression detection system is the lack of representative data. A solution to this is to narrow the context of the target application, so enough data is available to build robust models so high performance can be gained. Automatic pain detection from a patient's face represents one such application. To facilitate this work, researchers at McMaster University and University of Northern British Columbia captured video of participant's faces (who were suffering from shoulder pain) while they were performing a series of active and passive range-of-motion tests to their affected and unaffected limbs on two separate occasions. Each frame of this data was AU coded by certified FACS coders, and self-report and observer measures at the sequence level were taken as well. This database is called the UNBC-McMaster Shoulder Pain Expression Archive Database. To promote and facilitate research into pain and augment current datasets, we have publicly made available a portion of this database which includes: (1) 200 video sequences containing spontaneous facial expressions, (2) 48,398 FACS coded frames, (3) associated pain frame-by-frame scores and sequence-level self-report and observer measures, and (4) 66-point AAM landmarks. This paper documents this data distribution in addition to describing baseline results of our AAM/SVM system. This data will be available for distribution in March 2011.
Article
Full-text available
In this paper, we present a robust approach for pain expression recognition from video sequences. An automatic face detector is employed which uses skin color modeling to detect human face in the video sequence. The pain affected portions of the face are obtained by using a mask image. The obtained face images are then projected onto a feature space, defined by Eigenfaces, to produce the biometric template. Pain recognition is performed by projecting a new image onto the feature spaces spanned by the Eigenfaces and then classifying the painful face by comparing its position in the feature spaces with the positions of known individuals.
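The Eigenface projection and nearest-template matching described here can be sketched as follows, using random vectors as stand-ins for masked face images:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Stand-in for masked face images flattened to vectors; real input
# would be pixels from the pain-affected facial regions.
gallery = rng.normal(size=(20, 64))   # known face templates
labels = np.arange(20)

# Eigenfaces: a PCA basis learned over the gallery images.
pca = PCA(n_components=10).fit(gallery)
gallery_feats = pca.transform(gallery)

def recognise(image):
    """Project a probe image onto the Eigenface space and return
    the label of the nearest gallery template."""
    feats = pca.transform(np.asarray(image).reshape(1, -1))
    dists = np.linalg.norm(gallery_feats - feats, axis=1)
    return labels[np.argmin(dists)]

probe = gallery[7] + rng.normal(scale=0.01, size=64)  # noisy copy
```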
Article
Full-text available
We determine how the alert/verbal/painful/unresponsive (AVPU) responsiveness scale (alert, responsive to verbal stimulation, responsive to painful stimulation, and unresponsive) corresponds to the Glasgow Coma Scale (GCS) when assessing consciousness level in the poisoned patient. Consciousness level was assessed using the AVPU responsiveness scale and the GCS in all patients admitted to the hospital during a 6-month period with deliberate or accidental poisoning. An AVPU responsiveness scale algorithm and details of the individual components of the GCS were provided. Data were recorded prospectively on admission to the toxicology ward by nursing staff in the majority of cases and from case records for the small number of patients admitted directly to the ICU. Nursing staff also recorded any difficulty assessing consciousness level using either scoring system. Of the 1,384 patients studied, 1,138 patients were alert, 114 patients responded to a verbal stimulus, 87 patients responded to a painful stimulus, and 15 patients were unresponsive. The median GCS scores with interquartile ranges (IQR) for each AVPU responsiveness category were 15 (IQR 15), 13 (IQR 12 to 14), 8 (IQR 7 to 9), and 3 (IQR 3), respectively. There was a degree of overlap between the range of GCS scores for each category. Nursing staff recorded more difficulty using the GCS than the AVPU responsiveness scale. Alcohol-intoxicated patients proved to be the most difficult to assess. All patients who were unresponsive required intubation. No patient with a GCS score greater than 6 was intubated. Each AVPU category can be shown to correspond to a range of GCS scores. The AVPU responsiveness scale appears to provide a rapid simple method of assessing consciousness level in most poisoned patients, but difficulty was still observed in assessing alcohol-intoxicated patients.
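The reported correspondence between AVPU categories and median GCS scores can be captured in a small lookup table; note this summarises the study's observed medians and interquartile ranges, not a clinically validated conversion rule:

```python
# Median GCS score and interquartile range observed per AVPU
# category in the study of 1,384 poisoned patients.
AVPU_TO_GCS = {
    "alert":        {"median": 15, "iqr": (15, 15)},
    "verbal":       {"median": 13, "iqr": (12, 14)},
    "painful":      {"median": 8,  "iqr": (7, 9)},
    "unresponsive": {"median": 3,  "iqr": (3, 3)},
}

def estimate_gcs(avpu: str) -> int:
    """Return the study's median GCS for an AVPU category.

    This is a summary of reported data; the study notes the GCS
    ranges for adjacent categories overlap, so the median is only
    a rough central estimate."""
    return AVPU_TO_GCS[avpu]["median"]
```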
Book
This book reports on the latest advances in the modeling, analysis and efficient management of information in Internet of Things (IoT) applications in the context of 5G access technologies. It presents cutting-edge applications made possible by the implementation of femtocell networks and millimeter wave communications solutions, examining them from the perspective of the universally and constantly connected IoT. Moreover, it describes novel architectural approaches to the IoT and presents the new framework possibilities offered by 5G mobile networks, including middleware requirements, node-centrality and the location of extensive functionalities at the edge. By providing researchers and professionals with a timely snapshot of emerging mobile communication systems, and highlighting the main pitfalls and potential solutions, the book fills an important gap in the literature and will foster the further developments of 5G hosting IoT devices.
Chapter
Mission-critical communications have traditionally been provided by proprietary communication systems (like TETRA), offering a limited set of capabilities and mainly targeting voice services. Nevertheless, the current explosion of mobile communications and the need for increased performance and availability, especially in mission-critical scenarios, require a broad range of services to be available on these platforms. In this sense LTE technology is very promising, as it provides mechanisms to enforce QoS, has standardized many functions useful in public safety scenarios (like group communications, positioning services, etc.), and is being evolved to match future 5G requirements. The Q4Health project aims to prepare for market and optimize the BlueEye system, a video service platform for first responders. In our approach we use two FIRE+ platforms for demonstrations: OpenAirInterface and PerformNetworks. Q4Health is driving the optimization of the system with a set of experiments, each focusing on a different aspect of the network (core network, radio access and user equipment), and aims to cover current LTE standards as well as future 5G enhancements. The project's outcomes will be the optimization of the overall BlueEye system and the enrichment of the involved FIRE+ facilities with more services, functions and programmability.
Article
The present study examined psychometric properties of facial expressions of pain. A diverse sample of 129 people suffering from shoulder pain underwent a battery of active and passive range-of-motion tests to their affected and unaffected limbs. The same tests were repeated on a second occasion. Participants rated the maximum pain induced by each test on three self-report scales. Facial actions were measured with the Facial Action Coding System. Several facial actions discriminated painful from non-painful movements; however, brow-lowering, orbit tightening, levator contraction and eye closing appeared to constitute a distinct, unitary action. An index of pain expression based on these actions demonstrated test-retest reliability and concurrent validity with self-reports of pain. The findings support the concept of a core pain expression with desirable psychometric properties. They are also consistent with the suggestion of individual differences in pain expressiveness. Reasons for varying reports of relations between pain expression and self-reports in previous studies are discussed.
Telemedicine and Facility Design
  • R. H. A. J. Looney
  • HFM Magazine
How 5G technology enables the health internet of things
  • D. M. West
D. M. West, "How 5G technology enables the health internet of things," Brookings. [Online]. Available: https://goo.gl/6n3awK
A new generation of the health system is powered by 5G
  • C. Politis
C. Politis, "A new generation of the health system is powered by 5G," 2016. [Online]. Available: https://goo.gl/FKRBn9
What 5G Will Mean for You
  • M. Scott
M. Scott, "What 5G Will Mean for You," New York Times, 2016.
Mission Critical Communications Over LTE and Future 5G Technologies
  • C. A. Garcia-Perez
C. A. Garcia-Perez et al., "Mission Critical Communications Over LTE and Future 5G Technologies," in Q4Health.
Develop Immersive Experiences
  • Intel
Intel, "Develop Immersive Experiences," 2017. [Online]. Available: https://software.intel.com/en-us/realsense/home. [Accessed 11 Jul. 2017].
Specifications for the Intel® RealSense™ Camera F200
  • Intel
Intel, "Specifications for the Intel® RealSense™ Camera F200." [Online]. Available: https://communities.intel.com/docs/DOC-24012. [Accessed 11 Jul. 2017].
Facial Action Coding System
  • P. Ekman
  • W. Friesen
  • J. Hager
P. Ekman, W. Friesen, and J. Hager, Facial Action Coding System, Network Research Information, Salt Lake City, UT, 2002.
Comparison of consciousness level assessment in the poisoned patient using the alert/verbal/painful/unresponsive scale and the Glasgow Coma Scale
  • C. A. Kelly
  • A. Upex
  • D. N. Bateman