Conference Paper

Recognition of unscripted kitchen activities and eating behaviour for health monitoring


Abstract

Nutrition-related health conditions can seriously decrease the quality of life, and a system able to monitor the kitchen activities and eating behaviour of patients could provide clinicians with important information for improving the patient’s condition. We propose a symbolic model able to describe unscripted kitchen activities and eating behaviour of people in home settings. The model consists of an ontology that describes the problem domain and of a computational state space model that can reason in a probabilistic manner about the person’s actions, goals, and causes of problems during action execution. To validate our model, we recorded 15 unscripted kitchen tasks involving 9 subjects and manually annotated the video data according to the proposed ontology schema. We then evaluated the model’s ability to recognise people’s activities and their goals by generating simulated noisy observations from the annotation of the experiments. The results showed that our model recognises kitchen activities with an average accuracy of 0.8 when using specialised models, and 0.4 when using the general model.


... In a previous work, we showed preliminary results of a CSSM approach called Computational Causal Behaviour Models (CCBM), which reasons about one's activities in real cooking scenarios [14,38]. In this work, we extend our previous work by providing detailed information on the approach and the developed model. ...
... The types of meals are also listed there. The annotation was later used to simulate data for the experiments reported in [38]. It was also used as a ground truth in this work during the model evaluation as well as for training the observation model for the CCBM. ...
... Rectangles show objects; ellipses describe the object types; and arrows indicate the hierarchy or "is-a" relation (the arrow points to the parent class). Figure adapted from [38]. ...
Article
Full-text available
Wellbeing is often affected by health-related conditions. Among them are nutrition-related health conditions, which can significantly decrease the quality of life. We envision a system that monitors the kitchen activities of patients and that, based on the detected eating behaviour, could provide clinicians with indicators for improving a patient’s health. To be successful, such a system has to reason about the person’s actions and goals. To address this problem, we introduce a symbolic behaviour recognition approach, called Computational Causal Behaviour Models (CCBM). CCBM combines a symbolic representation of a person’s behaviour with probabilistic inference to reason about one’s actions, the type of meal being prepared, and its potential health impact. To evaluate the approach, we use a cooking dataset of unscripted kitchen activities, which contains data from various sensors in a real kitchen. The results show that the approach is able to reason about the person’s cooking actions. It is also able to recognise the goal in terms of the type of prepared meal and whether it is healthy. Furthermore, we compare CCBM to state-of-the-art approaches such as Hidden Markov Models (HMM) and decision trees (DT). The results show that our approach performs comparably to the HMM and DT when used for activity recognition. It outperformed the HMM for goal recognition of the type of meal, with a median accuracy of 1 compared to 0.12 for the HMM. Our approach also outperformed the HMM in recognising whether a meal is healthy, with a median accuracy of 1 compared to 0.5 for the HMM.
... What is more, so far CSSMs have been used for goal recognition based only on simulated data [3], [8]. In a previous work we proposed a CSSM model that is able to recognise the protagonist's activities during unscripted kitchen tasks [12]. The model was tested on simulated sensor data. ...
... To reduce the impact of this artefact on the model performance, a sliding window of 5 time steps with 50% overlap was used, and the observations in each window were represented by the maximum value of each sensor within the window. c) CCBM Models: In this work we use an extended version of the model proposed in [12], where it was used for activity recognition on simulated data. Here, we extend the model by adding probabilistic action durations and goal recognition, and apply it to real sensor data for following different goals. ...
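The sliding-window max-pooling preprocessing described in the snippet above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the data layout (one row of sensor readings per time step) and the step of 2 (approximating 50% overlap of a 5-step window) are assumptions.

```python
def window_max(observations, width=5, step=2):
    """Summarise each sliding window by the per-sensor maximum.

    observations: list of per-time-step readings, each a list with
    one value per sensor. Returns one summarised row per window.
    """
    n_sensors = len(observations[0])
    windows = []
    for start in range(0, len(observations) - width + 1, step):
        chunk = observations[start:start + width]
        # Keep the maximum of each sensor over the window.
        windows.append([max(row[s] for row in chunk) for s in range(n_sensors)])
    return windows

# Toy example: 10 time steps, 2 binary sensors.
obs = [[0, 0], [1, 0], [0, 0], [0, 1], [0, 0],
       [0, 0], [1, 1], [0, 0], [0, 0], [0, 0]]
print(window_max(obs))  # → [[1, 1], [1, 1], [1, 1]]
```

Taking the maximum rather than the mean preserves brief sensor activations (e.g. a cupboard door opening for a single time step) that averaging would dilute.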
... The model dimensions for the two model implementations can be seen in Table I. Some additional discussion on the models can be found in [12]. Goals in the model: The model has three types of goals: 1) the type of meal the person is preparing (13 goals); 2) whether the meal / drink is healthy or not (4 goals); 3) whether the person is depressed or not (2 goals). ...
Conference Paper
Full-text available
Nutrition related health conditions can seriously decrease quality of life; a system able to monitor the kitchen activities and eating behaviour of patients could provide clinicians with important indicators for improving a patient’s condition. To achieve this, the system has to reason about the person’s actions and goals. To address this challenge, we present a behaviour recognition approach that relies on symbolic behaviour representation and probabilistic reasoning to recognise the person’s actions, the type of meal being prepared and its potential impact on a patient’s health. We test our approach on a cooking dataset containing unscripted kitchen activities recorded with various sensors in a real kitchen. The results show that the approach is able to recognise the sequence of executed actions and the prepared meal, to determine whether it is healthy, and to reason about the possibility of depression based on the type of meal.
... Quantity and quality of food intake are particularly crucial factors contributing to a healthy lifestyle [23]. An unhealthy diet may lead to nutrition-related diseases, which in turn can reduce the quality of life [21]. A system able to monitor people's cooking and, thus, eating behavior could provide insightful information to the user towards the improvement of their health. ...
... Several researchers have addressed the specific problem of recognizing cooking activities [4,13,16,23,18,19,21,22]. Pham et al. [4] propose a real-time approach to classify fine-grained cooking activities such as, e.g., peeling, slicing and dicing, using accelerometer data. ...
Chapter
In this paper, we present an automatic approach to recognize cooking activities from acceleration and motion data. We rely on a dataset that contains three-axis acceleration and motion data collected with multiple devices, including two wristbands, two smartphones and a motion capture system. The data is collected from three participants while preparing sandwich, fruit salad and cereal recipes. The participants performed several fine-grained activities while preparing each recipe, such as cut and peel. We propose to use a multi-class classification approach to distinguish between cooking recipes and a multi-label classification approach to identify the fine-grained activities. Our approach achieves 81% accuracy in recognizing fine-grained activities and 66% accuracy in distinguishing between different recipes using leave-one-subject-out cross-validation. The multi-class and multi-label classification results are 27 and 50 percentage points higher than the baseline, respectively. We further investigate the effect on classification performance of different strategies to cope with missing data, and show that imputing missing data with an iterative approach yields a 3 percentage point improvement in identifying fine-grained activities. We confirm findings from the literature that extracting features from multiple sensors achieves higher performance in comparison to using single-sensor features.
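The leave-one-subject-out cross-validation used in the evaluation above can be sketched as follows; this is a minimal, generic illustration (the subject IDs and the placeholder split reporting are hypothetical, not from the cited work):

```python
def leave_one_subject_out(samples):
    """Yield one (held_out, train, test) split per subject.

    samples: list of (subject_id, features, label) tuples.
    Each subject's data is held out exactly once.
    """
    subjects = sorted({s for s, _, _ in samples})
    for held_out in subjects:
        train = [x for x in samples if x[0] != held_out]
        test = [x for x in samples if x[0] == held_out]
        yield held_out, train, test

# Hypothetical toy dataset: 3 subjects, 2 samples each.
data = [(s, [s * 0.1, i], "cut" if i == 0 else "peel")
        for s in (1, 2, 3) for i in (0, 1)]

for held_out, train, test in leave_one_subject_out(data):
    # A real pipeline would fit a classifier on `train` and
    # score it on `test`; here we just report the split sizes.
    print(held_out, len(train), len(test))  # → e.g. "1 4 2"
```

Splitting by subject rather than by random sample avoids leaking one person's sensor idiosyncrasies into both train and test sets, which is why it gives a more honest estimate of cross-person generalisation.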
... Recently, researchers have used various devices such as wristbands, smartwatches, finger movement sensors, ear-based sensors, glasses or cameras to automatically detect eating action [4,8,9,47,48]. Due to privacy or environmental constraints, the usage of cameras is often not feasible in many scenarios, and thus we focus on the challenges associated with using non-visual sensors that typically include inertial movement units (IMU) and electromyography (EMG). ...
... (iii) Instant feedback is critical to bring about behavioral changes to eating patterns; however, most research works do not perform instant detection of eating action. Most recent works with accelerometer data propose detection after complete data collection [9,47,48], and the works that do propose instant eating action detection use other sensors such as video, data gloves, or finger movement sensors [16,24], which are not easy to utilize. ...
Article
Eating activity monitoring using wearable sensors can potentially enable interventions based on eating speed to mitigate the risks of critical healthcare problems such as obesity or diabetes. Eating actions are poly-componential gestures composed of sequential arrangements of three distinct components interspersed with gestures that may be unrelated to eating. This makes it extremely challenging to accurately identify eating actions. The primary reasons for the lack of acceptance of state-of-the-art eating action monitoring techniques include the following: (i) the need to install wearable sensors that are cumbersome to wear or limit the mobility of the user, (ii) the need for manual input from the user, and (iii) poor accuracy in the absence of manual inputs. In this work, we propose a novel methodology, IDEA, that performs accurate eating action identification within eating episodes with an average F1 score of 0.92. This is an improvement of 0.11 for precision and 0.15 for recall for the worst-case users as compared to the state of the art. IDEA uses only a single wristband and provides feedback on eating speed every 2 min without obtaining any manual input from the user.
... Health monitoring of patients ranges from moderate to severe conditions. Video-based health monitoring covers the care of older adults, activities of daily living, cooking and diet for patients with diabetes, treatment of chronic patients at home, and care of patients with cardiovascular disease [3][4][5][7][8]. These various types of monitoring can be grouped into three categories: monitoring of the elderly, monitoring of a patient's illness, and monitoring of healthy people. ...
... Cognitive status is monitored through video of hand-washing motions [3]. More advanced monitoring targets the patient's disease, such as diabetes, chronic illness, and cardiac conditions [5][7][8]. Patients with these diseases need careful handling, whether as outpatients or as inpatients at the hospital. ...
... In addition, Whitehouse et al. [48] used big data technology (surveillance cameras) to monitor kitchen operations, improving food safety and providing greater peace of mind for guests sitting at the table. Not only from the food point of view, AI and big data technologies can also advise diners to choose quality restaurants and help restaurant managers make the best and most rational decisions. ...
Article
Full-text available
Over the past few decades, the food industry has undergone revolutionary changes due to the impacts of globalization, technological advancements, and ever-evolving consumer demands. Artificial intelligence (AI) and big data have become pivotal in strengthening food safety, production, and marketing. With the continuous evolution of AI technology and big data analytics, the food industry is poised to embrace further changes and developmental opportunities. An increasing number of food enterprises will leverage AI and big data to enhance product quality, meet consumer needs, and propel the industry toward a more intelligent and sustainable future. This review delves into the applications of AI and big data in the food sector, examining their impacts on production, quality, safety, risk management, and consumer insights. Furthermore, the advent of Industry 4.0 applied to the food industry has brought to the fore technologies such as smart agriculture, robotic farming, drones, 3D printing, and digital twins; the food industry also faces challenges in smart production and sustainable development going forward. This review articulates the current state of AI and big data applications in the food industry, analyses the challenges encountered, and discusses viable solutions. Lastly, it outlines the future development trends in the food industry.
... They tested their recognition model in the SPHERE kitchen (Bristol, UK) using sensors of temperature, humidity, light levels, noise levels, dust levels, motion within the room, cupboard and room door state, water and electricity usage and also an RGB-D (depth sensing) camera in the room which is used to provide positional data of the occupants. Their model was able to recognize kitchen activities with an average accuracy of 80% when using specialized models, and with an average accuracy of 40% when using the general model [18]. A similar experiment took place in Japan during 2002, when a group of researchers tested the system in two different houses, over a period of one year: several sensors were installed, including infrared sensors to detect human movement, magnetic switches to detect the opening and closing of doors, watt metres embedded in wall sockets to detect the use of household appliances and a flame detector to identify the use of a cooking stove [12]. ...
... One of the barriers to providing benchmark datasets with cycle-level information is the effort required to obtain and annotate them. Ontologies for daily activities, such as cooking [12,13], have been used to simplify the task when looking at non-cyclic data. Semi-supervised learning is also a common approach to reduce the labeling effort for activity-level labels [14,15]. ...
Article
Full-text available
Activity monitoring using wearables is becoming ubiquitous, although accurate cycle level analysis, such as step-counting and gait analysis, are limited by a lack of realistic and labeled datasets. The effort required to obtain and annotate such datasets is massive, therefore we propose a smart annotation pipeline which reduces the number of events needing manual adjustment to 14%. For scenarios dominated by walking, this annotation effort is as low as 8%. The pipeline consists of three smart annotation approaches, namely edge detection of the pressure data, local cyclicity estimation, and iteratively trained hierarchical hidden Markov models. Using this pipeline, we have collected and labeled a dataset with over 150,000 labeled cycles, each with 2 phases, from 80 subjects, which we have made publicly available. The dataset consists of 12 different task-driven activities, 10 of which are cyclic. These activities include not only straight and steady-state motions, but also transitions, different ranges of bouts, and changing directions. Each participant wore 5 synchronized inertial measurement units (IMUs) on the wrists, shoes, and in a pocket, as well as pressure insoles and video. We believe that this dataset and smart annotation pipeline are a good basis for creating a benchmark dataset for validation of other semi- and unsupervised algorithms.
... It could also be used to detect falls and health-related anomalies, which could save lives [5,6]. Activity recognition is also important to be able to understand different aspects of human behavior and emotions [7,8]. ...
Article
Full-text available
Most activity classifiers focus on recognizing application-specific activities that are mostly performed in a scripted manner, where there is very little room for variation within the activity. These classifiers are mainly good at recognizing short scripted activities that are performed in a specific way. In reality, especially when considering daily activities, humans perform complex activities in a variety of ways. In this work, we aim to make activity recognition more practical by proposing a novel approach to recognize complex heterogeneous activities that could be performed in a wide variety of ways. We collect data from 15 subjects performing eight complex activities and test our approach while analyzing it from different aspects. The results show the validity of our approach. They also show how it performs better than the state-of-the-art approaches that tried to recognize the same activities in a more controlled environment.
... We plan to exploit this additional information in a more complex model that is able to reason about the objects in the environment and their manipulation through the user actions. In a previous work we proposed such a model and applied it to the annotation from the kitchen experiment [14]. We also used all available sensors to evaluate the performance of a Computational State Space Model (CSSM) [16] for the kitchen scenario. ...
Conference Paper
Smart home systems are becoming increasingly relevant with every passing year, but while the technology is more available than ever, other issues such as cost and intrusiveness are becoming more apparent. To this end, we consider the types of sensors which are most useful for fine-grained activity recognition in the kitchen in terms of cost, intrusiveness, durability and ease of installation. We install sensors into a conventional residence for testing, and propose a system which meets the design challenges such an environment presents. We show that cupboard door sensors produce useful data about access to certain non-mechanical processes and items, while being cheap and simple. We also show that they positively impact the activity recognition performance of our model through their addition, while providing information that we can make use of in future studies.
Thesis
Full-text available
Cyclic motions such as walking, running or cycling are common to our daily lives. Thus, the analysis of these cycles has an important role to play within both the medical field, e.g. gait analysis, and the fitness domain, e.g. step counting and running analysis. For such applications, inertial sensors are ideal as they are mobile and unobtrusive. The aim of this thesis is to capture cyclic motion using inertial sensors and subsequently analyse them using machine learning techniques. A lack of realistic and annotated data currently limits the development and application of algorithms for inertial sensors under non-laboratory conditions. This is due to the effort required to both collect and label such data. The first contributions of this thesis propose novel methods to reduce annotation costs for realistic datasets, and in this manner enable the labelling of a large benchmark dataset. The applicability of the dataset is demonstrated by using it to propose and test a robust algorithm for simultaneous human activity recognition and cycle analysis. One of these methods for reducing annotation costs is then deployed to develop the first mobile gait analysis system for patients with a rare and heterogeneous disease, hereditary spastic paraplegia (HSP). Thus, machine learning algorithms which set the state-of-the-art for cycle analysis using inertial sensors were proposed and validated by this thesis. The outcomes of this thesis are beneficial in both the medical and fitness domains, enabling the development and use of algorithms trained and tested in realistic settings.
Article
Full-text available
The recognition of activities of daily living (ADLs) by home monitoring systems can be helpful in order to objectively assess the health-related living behaviour and functional ability of older adults. Many ADLs involve human interactions with household electrical appliances (HEAs) such as toasters and hair dryers. Advances in sensor technology have prompted the development of intelligent algorithms to recognise ADLs via inferential information provided from the use of HEAs. The use of robust unsupervised machine learning techniques with inexpensive and retrofittable sensors is an ongoing focus in the ADL recognition research. This paper presents a novel unsupervised activity recognition method for elderly people living alone. This approach exploits a fuzzy-based association rule-mining algorithm to identify the home occupant’s interactions with HEAs using a power sensor, retrofitted at the house electricity panel, and a few Kinect sensors deployed at various locations within the home. A set of fuzzy rules is learned automatically from unlabelled sensor data to map the occupant’s locations during ADLs to the power signatures of HEAs. The fuzzy rules are then used to classify ADLs in new sensor data. Evaluations in real-world settings in this study demonstrated the potential of using Kinect sensors in conjunction with a power meter for the recognition of ADLs. This method was found to be significantly more accurate than just using power consumption data. In addition, the evaluation results confirmed that, owing to the use of fuzzy logic, the proposed method tolerates real-life variations in ADLs where the feature values in new sensor data differ slightly from those in the learning patterns.
Article
Full-text available
There's a widely known need to revise current forms of healthcare provision. Of particular interest are sensing systems in the home, which have been central to several studies. This article presents an overview of this rapidly growing body of work, as well as the implications for machine learning, with an aim of uncovering the gap between the state of the art and the broad needs of healthcare services in ambient assisted living. Most approaches address specific healthcare concerns, which typically result in solutions that aren't able to support full-scale sensing and data analysis for a more generic healthcare service; the approach in this article differs by seamlessly linking multimodal data-collecting infrastructure and data analytics together in an AAL platform. This article also outlines a multimodality sensor platform with heterogeneous network connectivity, which is under development in the sensor platform for healthcare in a residential environment (SPHERE) Interdisciplinary Research Collaboration (IRC).
Article
Full-text available
Background: Computational state space models (CSSMs) enable the knowledge-based construction of Bayesian filters for recognizing intentions and reconstructing activities of human protagonists in application domains such as smart environments, assisted living, or security. Computational, i.e., algorithmic, representations allow the construction of increasingly complex human behaviour models. However, the symbolic models used in CSSMs potentially suffer from combinatorial explosion, rendering inference intractable outside of the limited experimental settings investigated in present research. The objective of this study was to obtain data on the feasibility of CSSM-based inference in domains of realistic complexity. Methods: A typical instrumental activity of daily living was used as a trial scenario. As primary sensor modality, wearable inertial measurement units were employed. The results achievable by CSSM methods were evaluated by comparison with those obtained from established training-based methods (hidden Markov models, HMMs) using Wilcoxon signed rank tests. The influence of modeling factors on CSSM performance was analyzed via repeated measures analysis of variance. Results: The symbolic domain model was found to have more than 10⁸ states, exceeding the complexity of models considered in previous research by at least three orders of magnitude. Nevertheless, if factors and procedures governing the inference process were suitably chosen, CSSMs outperformed HMMs. Specifically, inference methods used in previous studies (particle filters) were found to perform substantially inferior in comparison to a marginal filtering procedure. Conclusions: Our results suggest that the combinatorial explosion caused by rich CSSM models does not inevitably lead to intractable inference or inferior performance.
This means that the potential benefits of CSSM models (knowledge-based model construction, model reusability, reduced need for training data) are available without performance penalty. However, our results also show that research on CSSMs needs to consider sufficiently complex domains in order to understand the effects of design decisions such as choice of heuristics or inference procedure on performance.
Article
Full-text available
Activity models play a critical role for activity recognition and assistance in ambient assisted living. Existing approaches to activity modeling suffer from a number of problems, e.g., cold-start, model reusability, and incompleteness. In an effort to address these problems, we introduce an ontology-based hybrid approach to activity modeling that combines domain knowledge based model specification and data-driven model learning. Central to the approach is an iterative process that begins with “seed” activity models created by ontological engineering. The “seed” models are deployed, and subsequently evolved through incremental activity discovery and model update. While our previous work has detailed ontological activity modeling and activity recognition, this paper focuses on the systematic hybrid approach and associated methods and inference rules for learning new activities and user activity profiles. The approach has been implemented in a feature-rich assistive living system. Analysis of the experiments conducted has been undertaken in an effort to test and evaluate the activity learning algorithms and associated mechanisms.
Article
Full-text available
Using supervised machine learning approaches to recognize human activities from on-body wearable accelerometers generally requires a large amount of labelled data. When ground truth information is not available, too expensive, time consuming or difficult to collect, one has to rely on unsupervised approaches. This paper presents a new unsupervised approach for human activity recognition from raw acceleration data measured using inertial wearable sensors. The proposed method is based upon joint segmentation of multidimensional time series using a Hidden Markov Model (HMM) in a multiple regression context. The model is learned in an unsupervised framework using the Expectation-Maximization (EM) algorithm, where no activity labels are needed. The proposed method takes into account the sequential appearance of the data. It is therefore well suited to temporal acceleration data for accurately detecting activities. It allows both segmentation and classification of the human activities. Experimental results are provided to demonstrate the efficiency of the proposed approach with respect to standard supervised and unsupervised classification approaches.
Article
Full-text available
Proper nutrition offers one of the most effective and least costly ways to decrease the burden of many diseases and their associated risk factors, including obesity. Nutrition research holds the key to increasing our understanding of the causes of obesity and its related comorbidities and thus holds promise to markedly influence global health and economies. After outreach to 75 thought leaders, the American Society for Nutrition (ASN) convened a Working Group to identify the nutrition research needs whose advancement will have the greatest projected impact on the future health and well-being of global populations. ASN's Nutrition Research Needs focus on the following high priority areas: 1) variability in individual responses to diet and foods; 2) healthy growth, development, and reproduction; 3) health maintenance; 4) medical management; 5) nutrition-related behaviors; and 6) food supply/environment. ASN hopes the Nutrition Research Needs will prompt collaboration among scientists across all disciplines to advance this challenging research agenda given the high potential for translation and impact on public health. Furthermore, ASN hopes the findings from the Nutrition Research Needs will stimulate the development and adoption of new and innovative strategies that can be applied toward the prevention and treatment of nutrition-related diseases. The multidisciplinary nature of nutrition research requires stakeholders with differing areas of expertise to collaborate on multifaceted approaches to establish the evidence-based nutrition guidance and policies that will lead to better health for the global population. In addition to the identified research needs, ASN also identified 5 tools that are critical to the advancement of the Nutrition Research Needs: 1) omics, 2) bioinformatics, 3) databases, 4) biomarkers, and 5) cost-effectiveness analysis.
Article
Full-text available
Utilization of computer tools in linguistic research has gained importance with the maturation of media frameworks for the handling of digital audio and video. The increased use of these tools in gesture, sign language and multimodal interaction studies has led to stronger requirements on the flexibility, the efficiency and in particular the time accuracy of annotation tools. This paper describes the efforts made to make ELAN a tool that meets these requirements, with special attention to the developments in the area of time accuracy. In subsequent sections an overview will be given of other enhancements in the latest versions of ELAN, that make it a useful tool in multimodality research.
Chapter
Full-text available
Providing cognitive assistance to Alzheimer’s patients in smart homes is a field of research that receives a lot of attention lately. The recognition of the patient’s behavior when he carries out some activities in a smart home is primordial in order to give adequate assistance at the opportune moment. To address this challenging issue, we present a formal activity recognition framework based on possibility theory and description logics. We present initial results from an implementation of this recognition approach in a smart home laboratory.
Conference Paper
Full-text available
The variability of human behavior during plan execution poses a difficult challenge for human-robot teams. In this paper, we use the concepts of theory of mind to enable robots to account for two sources of human variability during team operation. When faced with an unexpected action by a human teammate, a robot uses a simulation analysis of different hypothetical cognitive models of the human to identify the most likely cause for the human's behavior. This allows the cognitive robot to account for variances due to both different knowledge and beliefs about the world, as well as different possible paths the human could take with a given set of knowledge and beliefs. An experiment showed that cognitive robots equipped with this functionality are viewed as both more natural and intelligent teammates, compared to both robots who either say nothing when presented with human variability, and robots who simply point out any discrepancies between the human's expected, and actual, behavior. Overall, this analysis leads to an effective, general approach for determining what thought process is leading to a human's actions.
Article
Full-text available
Smart homes provide support to cognitively impaired people (such as those suffering from Alzheimer’s disease) so that they can remain at home in an autonomous and safe way. Models of this impaired population should benefit the cognitive assistance’s efficiency and responsiveness. This paper presents a way to model and simulate the progression of dementia of the Alzheimer’s type by evaluating performance in the execution of an activity of daily living (ADL). This model satisfies three objectives: first, it models an activity of daily living; second, it simulates the progression of the dementia and the errors potentially made by people suffering from it, and, finally, it simulates the support needed by the impaired person. To develop this model, we chose the ACT-R cognitive architecture, which uses symbolic and subsymbolic representations. The simulated results of 100 people suffering from Alzheimer’s disease closely resemble the results obtained by 106 people on an occupational assessment (the Kitchen Task Assessment).
Conference Paper
IT based Healthcare platforms have been widely recognized by research communities and institutions as key players in the future of home-based health monitoring and care. Features like personalised care, continuous monitoring, and reduced costs are fostering the research and use of these technologies. In this paper, we describe the design and implementation of the video monitoring system of the SPHERE platform (Sensor Platform for Healthcare in a Residential Environment). SPHERE aims to develop a smart home platform based on low cost, non-medical sensors. We present a detailed description of the hardware and software infrastructure designed and tested in real life scenarios, with particular emphasis on the design considerations employed to foster collaboration, the real time and budget constraints, and mid-scale deployment plan of our case study.
Article
The article can be downloaded from https://mmis.informatik.uni-rostock.de/index.php?title=A_Process_for_Systematic_Development_of_Symbolic_Models_for_Activity_Recognition Several emerging approaches to activity recognition (AR) combine symbolic representation of user actions with probabilistic elements for reasoning under uncertainty. These approaches provide promising results in terms of recognition performance, coping with the uncertainty of observations, and model size explosion when complex problems are modelled. But experience has shown that it is not always intuitive to model even seemingly simple problems. To date, there are no guidelines for developing such models. To address this problem, in this work we present a development process for building symbolic models that is based on experience acquired so far as well as on existing engineering and data analysis workflows. The proposed process is a first attempt at providing structured guidelines and practices for designing, modelling, and evaluating human behaviour in the form of symbolic models for AR. As an illustration of the process, a simple example from the office domain was developed. The process was evaluated in a comparative study of an intuitive process and the proposed process. The results showed a significant improvement over the intuitive process. Furthermore, the study participants reported greater ease of use and perceived effectiveness when following the proposed process. To evaluate the applicability of the process to more complex AR problems, it was applied to a problem from the kitchen domain. The results showed that following the proposed process yielded an average accuracy of 78%. The developed model outperformed state-of-the-art methods applied to the same dataset in previous work, and it performed comparably to a symbolic model developed by a model expert without following the proposed development process.
Conference Paper
Context-aware activity recognition plays an important role in different types of assistive systems, and the way context information is represented is a topic of various current projects. Here we present tool support for activity recognition using computational causal behaviour models, which allow the combination of symbolic causal model representation and probabilistic inference. The aim of the tool is to provide a flexible way of generating probabilistic inference engines from prior knowledge, which reduces the need for collecting expensive training data.
Conference Paper
In this work, a system for recognizing activities in the home setting using a set of small and simple state-change sensors is introduced. The sensors are designed to be “tape on and forget” devices that can be quickly and ubiquitously installed in home environments. The proposed sensing system presents an alternative to sensors that are sometimes perceived as invasive, such as cameras and microphones. Unlike prior work, the system has been deployed in multiple residential environments with non-researcher occupants. Preliminary results on a small dataset show that it is possible to recognize activities of interest to medical professionals such as toileting, bathing, and grooming with detection accuracies ranging from 25% to 89% depending on the evaluation criteria used.
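As a rough illustration of how activities might be recognised from such binary state-change sensors, the sketch below trains a naive-Bayes classifier over sets of sensor firings. The sensor names and training examples are hypothetical, and the paper's actual classifier may well differ:

```python
import math
from collections import defaultdict

def train(examples):
    """examples: list of (activity, set_of_fired_sensors) pairs.
    Returns per-activity sensor firing probabilities (add-one smoothed)
    and per-activity example counts."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    sensors = set()
    for activity, fired in examples:
        totals[activity] += 1
        sensors |= fired
        for s in fired:
            counts[activity][s] += 1
    model = {a: {s: (counts[a][s] + 1) / (totals[a] + 2) for s in sensors}
             for a in totals}
    return model, dict(totals)

def classify(model, totals, fired):
    """Return the activity with the highest log-posterior given the set
    of sensors that fired in a time window."""
    n = sum(totals.values())
    best, best_lp = None, float("-inf")
    for activity, probs in model.items():
        lp = math.log(totals[activity] / n)  # class prior
        for s, p in probs.items():
            lp += math.log(p if s in fired else 1.0 - p)
        if lp > best_lp:
            best, best_lp = activity, lp
    return best

# Hypothetical training data: activities with the sensors they trip.
examples = [
    ("toileting", {"toilet_flush", "bathroom_door"}),
    ("toileting", {"toilet_flush"}),
    ("grooming", {"bathroom_cabinet", "faucet"}),
]
model, totals = train(examples)
prediction = classify(model, totals, {"toilet_flush"})
```

The add-one smoothing keeps unseen sensor/activity combinations from zeroing out a class's likelihood, which matters with the small datasets such deployments produce.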
Conference Paper
Plan recognition is the problem of inferring the goals and plans of an agent from partial observations of her behavior. Recently, it has been shown that the problem can be formulated and solved using planners, reducing plan recognition to plan generation. In this work, we extend this model-based approach to plan recognition to the POMDP setting, where actions are stochastic and states are partially observable. The task is to infer a probability distribution over the possible goals of an agent whose behavior results from a POMDP model. The POMDP model is shared between agent and observer, except for the true goal of the agent, which is hidden from the observer. The observations are action sequences O that may contain gaps, as some or even most of the actions done by the agent may not be observed. We show that the posterior goal distribution P(G|O) can be computed from the value function V_G(b) over beliefs b generated by the POMDP planner for each possible goal G. Some extensions of the basic framework are discussed, and a number of experiments are reported.
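The goal posterior P(G|O) described here can be sketched as a Bayesian update in which each goal's likelihood is a softmax (Boltzmann) function of its value for the observed behaviour. The likelihood model and the numbers below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def goal_posterior(values, prior=None, beta=1.0):
    """Compute P(G|O) from per-goal values V_G via a softmax likelihood
    model: P(O|G) proportional to exp(beta * V_G), combined with a prior
    over goals (uniform by default)."""
    goals = list(values)
    if prior is None:
        prior = {g: 1.0 / len(goals) for g in goals}
    # Subtract the max value before exponentiating for numerical stability.
    vmax = max(values.values())
    unnorm = {g: math.exp(beta * (values[g] - vmax)) * prior[g] for g in goals}
    z = sum(unnorm.values())
    return {g: w / z for g, w in unnorm.items()}

# Illustrative values: the observations fit "make_tea" better.
post = goal_posterior({"make_tea": 5.0, "make_coffee": 2.0})
```

The `beta` parameter controls how strongly value differences separate the goals: large `beta` approaches a hard argmax, small `beta` approaches the prior.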
Article
Researchers and medical practitioners have long sought the ability to continuously and automatically monitor patients beyond the confines of a doctor's office. We describe a smart home monitoring and analysis platform that facilitates the automatic gathering of rich databases of behavioral information in a manner that is transparent to the patient. Collected information will be automatically or manually analyzed and reported to the caregivers and may be interpreted for behavioral modification in the patient. Our health platform consists of five technology layers. The architecture is designed to be flexible, extensible, and transparent, to support plug-and-play operation of new devices and components, and to provide remote monitoring and programming opportunities. The smart home-based health platform technologies have been tested in two physical smart environments. Data that are collected in these implemented physical layers are processed and analyzed by our activity recognition and chewing classification algorithms. All of these components have yielded accurate analyses for subjects in the smart environment test beds. This work represents an important first step in the field of smart environment-based health monitoring and assistance. The architecture can be used to monitor the activity, diet, and exercise compliance of diabetes patients and evaluate the effects of alternative medicine and behavior regimens. We believe these technologies are essential for providing accessible, low-cost health assistance in an individual's own home and for providing the best possible quality of life for individuals with diabetes.
ELAN: a Professional Framework for Multimodality Research
P. Wittenburg, H. Brugman, A. Russel, A. Klassmann, and H. Sloetjes. In Proceedings of LREC, 2006.