Conference Paper

K9-Blyzer - Towards Video-Based Automatic Analysis of Canine Behavior

Authors: Amir, Zamansky, and van der Linden

Abstract

Automatic analysis of animal behavior has the potential to revolutionize the work of animal science and ACI researchers. Many tracking and behavior analysis systems exist for different species, such as birds, insects and mice, but behavior analysis in the canine domain still remains a challenging task. In this research-in-progress paper we describe K9-Blyzer (Canine Behavior Analyzer), which is a tool for automatic video analysis of canine behavior. We present preliminary results of automatic analysis of dog-robot interactions, point out some envisioned extensions of the tool and discuss the potential applications of the tool for the field of ACI.


... It is an emerging field encompassing the use of modern methods from computer science and engineering to quantitatively measure animal behavior [9]. In this paper we use computational analysis in the context of clinical assessment of ADHD-like behavior, as part of an ongoing study developing a decision support system for behavioral clinicians (first results have already appeared in [10-13]). ...
... We used a self-developed software tool, K9-Blyzer (first presented in Amir et al. [10] and later extended in [11-13]), for analyzing the video footage recorded at the clinics and producing the data on the movement variables discussed below. The system takes as input raw video of a dog freely moving in a room and produces as output the dog's location (x,y) in each frame. ...
Article
Full-text available
Computational approaches have been called for to address the challenge of more objective behavior assessment that is less reliant on owner reports. This study uses computational analysis to investigate the hypothesis that dogs with ADHD-like (attention deficit hyperactivity disorder) behavior exhibit characteristic movement patterns directly observable during veterinary consultation. Behavioral consultations of 12 dogs medically treated for ADHD-like behavior were recorded, as well as of a control group of 12 dogs with no reported behavioral problems. Computational analysis with a self-developed tool based on computer vision and machine learning was performed, analyzing 12 movement parameters that can be extracted from automatic dog-tracking data. Significant differences in seven movement parameters were found, leading to the identification of three dimensions of movement patterns which may be instrumental for more objective assessment of ADHD-like behavior by clinicians, while being directly observable during consultation: (i) high speed, (ii) large coverage of space, and (iii) constant re-orientation in space. Computational tools used on video data collected during consultation have the potential to support quantifiable assessment of ADHD-like behavior informed by the identified dimensions.
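To make these dimensions concrete, here is a minimal Python sketch (not the study's actual code) deriving the three identified movement dimensions from a sequence of per-frame (x, y) positions; the grid-based coverage measure and all parameter values are illustrative assumptions.

```python
import numpy as np

def movement_dimensions(xy, fps=25.0, cell=0.5):
    """Derive three illustrative movement dimensions from an (N, 2) array
    of per-frame dog positions in metres. Assumed schema and definitions,
    not the paper's exact movement parameters."""
    xy = np.asarray(xy, dtype=float)
    steps = np.diff(xy, axis=0)                 # frame-to-frame displacement
    dists = np.linalg.norm(steps, axis=1)

    # (i) high speed: mean speed in m/s
    mean_speed = dists.mean() * fps

    # (ii) coverage of space: number of distinct grid cells visited
    cells = {(int(x // cell), int(y // cell)) for x, y in xy}
    coverage = len(cells)

    # (iii) re-orientation: mean absolute turning angle between steps (rad),
    # wrapped to [-pi, pi] via the complex exponential
    headings = np.arctan2(steps[:, 1], steps[:, 0])
    turns = np.abs(np.angle(np.exp(1j * np.diff(headings))))
    reorientation = turns.mean()

    return mean_speed, coverage, reorientation
```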
... We used a self-developed software tool, K9-Blyzer (Amir, Zamansky, & van der Linden, 2017), for analyzing the video footage recorded at the clinics and producing the data on the movement variables listed in Table 2. Figure 4 shows some example frames where the tool identifies the dog; an example recording is also available online. For more information on K9-Blyzer, see Amir et al. (2017) and the K9-Blyzer website. To the best of our knowledge, this is the first known use of automatic video analysis for canine movement analysis, and, as such, the accuracy of such analysis should be further studied. ...
... To the best of our knowledge, this is the first known use of automatic video analysis for canine movement analysis, and, as such, the accuracy of such analysis should be further studied. The accuracy of the detection of some behaviors has been evaluated in Amir et al. (2017). To ensure accuracy of measurement, automatic detection of the dogs in frames was manually validated using the BORIS ethological observation software (Friard & Gamba, 2016). ...
Article
Full-text available
Canine behavioral disorders, such as various forms of fear and anxiety, are a major threat for the well-being of dogs and their owners. They are also the main cause for dog abandonment and relinquishment to shelters. Timely diagnosis and treatment of such problems is a complex task, requiring extensive behavioral expertise. Accurate classification of pathological behavior requires information on the dog's reactions to environmental stimuli. Such information is typically self-reported by the animal's owner, posing a threat to its accuracy and correctness. Simple robots have been used as controllable stimuli for evoking particular canine behaviors, leading to the increasing interest in dog-robot interactions (DRIs). We explore the use of DRIs as a tool for the assessment of canine behavioral disorders. More concretely, we ask in what ways disorders such as anxiety may be reflected in the way dogs interact with a robot. To this end, we performed an exploratory study, recording DRIs for a group of 20 dogs, consisting of 10 dogs diagnosed by a behavioral expert veterinarian with deprivation syndrome, a form of phobia/anxiety caused by inadequate development conditions, and 10 healthy control dogs. Pathological dogs moved significantly less than the control group during these interactions.
... This requires an expert user to implement these systems. Several works, especially in the area of animal behaviour, have used just colour cameras to detect animals' shapes and track their movement without any posture detection, such as with pigs [115], dogs [113], chickens [11] or mice [106]. North et al. [108,109] have proposed, and are currently developing, a video-based automated behaviour identification software tool for observations of both horse-to-horse and horse-to-human interaction, coined HABIT. ...
Article
Full-text available
As technologies diversify and become embedded in everyday life, the technologies we expose animals to, and the new technologies being developed for animals within the field of Animal Computer Interaction (ACI), are increasing. As we approach seven years since the ACI manifesto, which grounded the field within Human Computer Interaction and Computer Science, this thematic literature review looks at the technologies developed for (non-human) animals. The technologies analysed include tangible and physical, haptic and wearable, olfactory, screen, and tracking systems. The discussion explores what exactly ACI is, whilst questioning what it means to be an animal by considering the impact of, and the loop between, machine and animal interactivity. The findings of this review are expected to form a first grounding of ACI technologies, informing future research in animal computing and suggesting future areas for exploration.
... To compare the system's similarity to the manual recordings of behaviour, 15 random nights were selected to be evaluated by the system and a human observer. Behavioural observations were carried out using focal sampling with continuous recordings of behaviour [63]. Sleep duration was recorded at any time the animal was in a resting position, with eyes closed, and/or with no perceivable movement. ...
Article
Full-text available
Although direct behavioural observations are widely used, they are time-consuming, prone to error, require knowledge of the observed species, and depend on intra/inter-observer consistency. As a result, they pose challenges to the reliability and repeatability of studies. Automated video analysis is becoming popular for behavioural observations. Sleep has the potential to become a reliable broad-spectrum metric of quality of life: understanding sleep patterns can help identify and address potential welfare concerns, such as stress, discomfort, or health issues, thus promoting the overall welfare of animals. However, due to the laborious process of quantifying sleep patterns, sleep has been overlooked in animal welfare research. This study presents a system comparing convolutional neural networks (CNNs) with direct behavioural observation methods on the same data to detect and quantify dogs' sleeping patterns. A total of 13,688 videos were used to develop and train the model to quantify sleep duration and sleep fragmentation in dogs. To evaluate its similarity to the direct behavioural observations made by a single human observer, 6,000 previously unseen frames were used. The system successfully classified 5,430 frames, scoring a similarity rate of 89% when compared to the manually recorded observations. There was no significant difference in the percentage of time observed between the system and the human observer (p > 0.05). However, a significant difference was found in total sleep time recorded, where the automated system captured more hours than the observer (p < 0.05). This highlights the potential of using a CNN-based system in animal welfare and behaviour research.
... Although research concerned with animal behavior has so far lagged behind the human domain with respect to automation, the field has recently begun to catch up. This is due in part to developments in animal motion tracking and the introduction of general platforms, such as DeepLabCut (Mathis et al., 2018), EZtrack (Pennington et al., 2019), Blyzer (Amir et al., 2017), LEAP (Pereira et al., 2019), DeepPoseKit (Graving et al., 2019) and idtracker.ai (Romero-Ferrero et al., 2019). ...
Article
Full-text available
Advances in animal motion tracking and pose recognition have been a game changer in the study of animal behavior. Recently, an increasing number of works go ‘deeper’ than tracking, and address automated recognition of animals’ internal states such as emotions and pain with the aim of improving animal welfare, making this a timely moment for a systematization of the field. This paper provides a comprehensive survey of computer vision-based research on recognition of pain and emotional states in animals, addressing both facial and bodily behavior analysis. We summarize the efforts that have been presented so far within this topic—classifying them across different dimensions, highlight challenges and research gaps, and provide best practice recommendations for advancing the field, and some future directions for research.
... Since our main goal is to automatically produce insights into patterns found in the data, what makes a behavioral parameter a good feature is measurability, i.e., the availability of a method for accurately measuring the feature values for each video. In our case study, all of the chosen features were derived from movement trajectories that were automatically tracked using the BLYZER tool (29, 44-46). The tool takes as input videos of trials, automatically identifies the dog in each frame, and produces its movement trajectory in the form of time-series data saved in a machine-readable format (JSON). ...
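For illustration, a minimal sketch of consuming such trajectory output is shown below; the JSON field names are hypothetical, since the excerpt does not document BLYZER's actual schema.

```python
import json

# Hypothetical schema for BLYZER-style output: one detection per frame,
# serialized as a JSON time series. Field names are assumptions.
def load_trajectory(path):
    with open(path) as f:
        frames = json.load(f)   # e.g. [{"t": 0.04, "x": 1.2, "y": 3.4}, ...]
    # Skip frames where the dog was not detected (x assumed null there).
    return [(fr["t"], fr["x"], fr["y"]) for fr in frames if fr.get("x") is not None]

trajectory = load_trajectory("trial_01.json")
# Total distance travelled, summed over consecutive detections.
total_distance = sum(
    ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
    for (_, x1, y1), (_, x2, y2) in zip(trajectory, trajectory[1:])
)
```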
Article
Full-text available
Traditional methods of data analysis in animal behavior research are usually based on measuring behavior by manually coding a set of chosen behavioral parameters, which is naturally prone to human bias and error, and is also a tedious, labor-intensive task. Machine learning techniques are increasingly applied to support researchers in this field, mostly in a supervised manner: for tracking animals, detecting landmarks or recognizing actions. Unsupervised methods are increasingly used, but remain under-explored in the context of behavior studies and applied contexts such as behavioral testing of dogs. This study explores the potential of unsupervised approaches such as clustering for the automated discovery of patterns in data which have potential behavioral meaning. We aim to demonstrate that such patterns can be useful at exploratory stages of data analysis, before forming specific hypotheses. To this end, we propose a concrete method for grouping video trials of behavioral testing of animal individuals into clusters using a set of potentially relevant features. Using an example protocol for a “Stranger Test”, we compare the discovered clusters against the C-BARQ owner-based questionnaire, which is commonly used for dog behavioral trait assessment, showing that our method separated well between dogs with higher C-BARQ scores for stranger fear and those with lower scores. This demonstrates the potential of such a clustering approach for exploration prior to hypothesis forming and testing in behavioral research.
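A minimal sketch of the kind of clustering pipeline described here, assuming trajectory-derived features have already been extracted per trial; the feature file, scaling choice and k=2 are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Each row is one video trial described by trajectory-derived features,
# e.g. [mean speed, space coverage, mean distance to stranger, ...].
# The file name and feature choice are placeholders.
X = np.loadtxt("trial_features.csv", delimiter=",")
X_scaled = StandardScaler().fit_transform(X)   # put features on comparable scales

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_scaled)
# The resulting cluster labels can then be compared against an external
# measure such as C-BARQ stranger-fear scores.
```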
... Behavioral assessment and analysis are carried out in various ways, including dog owner questionnaires, expert evaluation, standardized measures, and observational studies [13]. However, because human observers may miss details or fail to detect subtle variations, it is necessary to create tools that support the evaluation of dogs' behavior [3,2]. A computer can analyze a whole day of dog activity, which is impractical for a human, so it would be valuable to capture those details and report them to the experts, who can then make more complete decisions since they will have more information regarding the behavior of the dogs. ...
Article
Full-text available
Dogs are the most common companion animals worldwide, owing to their exceptional social behavior with humans. Unlike many animals, dogs learn vocal commands, identify moods, maintain eye contact, and recognize facial expressions. In addition, dogs have great agility and senses of smell and hearing superior to humans', so they have been trained for crucial tasks like search, rescue, and assistance. It is therefore relevant to conduct scientific research into the fundamentals of canine behavior and communication, to make better use of these capabilities for human benefit while guaranteeing the animal's welfare. In this work, a computational method for analyzing dog behavior based on computer vision techniques was developed. A video database of dogs recorded under positive and negative stimuli that induced different emotional states was used. The proposed method determines the dog's emotional state at a given time, which opens a promising field for developing new technologies that trainers and users can take advantage of to improve the processes of selection, training, and task execution for working dogs. Using the proposed method, the best test accuracy we obtained was 0.6917, on the best model trained using transfer learning over the MobileNet architecture, giving good but not perfect results. The training process was carried out using 1067 images distributed among four categories: aggressiveness, anxiety, fear and neutral. The proposed method obtained acceptable results but can still be improved in technical and methodological terms. Nevertheless, it can be used as a baseline for exploring and expanding the study of canine behavior using computational models.
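A minimal sketch of transfer learning over MobileNet for the four categories named above, using Keras; the image size, optimizer, and data layout are assumptions, not the paper's reported configuration.

```python
import tensorflow as tf

# Transfer learning over a pretrained MobileNet backbone for four
# emotional-state categories (aggressiveness, anxiety, fear, neutral).
base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                               # freeze pretrained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(4, activation="softmax"),  # one unit per category
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Assumes one subfolder per class; the directory name is a placeholder.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dog_emotions/train", image_size=(224, 224))
model.fit(train_ds, epochs=10)
```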
... Blyzer [2,17] is another framework, based on deep learning neural networks, for analyzing movement parameters from animal video footage. It currently focuses on dog behavior analysis, and its underlying models have already been trained on multiple datasets collected in veterinary clinics and dog behavioral testing experiments (cf. ...
Article
Full-text available
Computational animal behavior analysis (CABA) is an emerging field which aims to apply AI techniques to support animal behavior analysis. The need for computational approaches which facilitate ‘objectivization’ and quantification of behavioral characteristics of animals is widely acknowledged within several animal-related scientific disciplines. State-of-the-art CABA approaches mainly apply machine learning (ML) techniques, combining them with approaches from computer vision and IoT. In this paper we highlight the potential applications of integrating knowledge representation approaches in the context of ML-based CABA systems, demonstrating the ideas using insights from an ongoing CABA project.
... Various methods have been presented aiming to gather these data by processing raw data from video cameras and other sensors (e.g. [1], [2], [3], [6], [9]). Although these methods seem promising, thorough testing in real-life environments has not yet been performed; moreover, some methods show low accuracy rates, or have special requirements or other limitations. ...
... Our approach takes a different strategy, using footage from the simplest web or security cameras and instead paying a "computational" price for the system's learning. It is part of our ongoing multi-disciplinary project for the automatic analysis of dog behavior based on video footage obtained from simple cameras (an overview of the project can be found in [14]; preliminary ideas were presented in [15]). In this paper we present a system developed to support an ongoing research project in animal science investigating sleeping patterns of kennelled dogs as indicators of their welfare. ...
Chapter
Full-text available
Video-based analysis is one of the most important tools of animal behavior and animal welfare scientists. While automatic analysis systems exist for many species, this problem has not yet been adequately addressed for one of the most studied species in animal science: dogs. In this paper we describe a system developed for analyzing sleeping patterns of kenneled dogs, which may serve as an indicator of their welfare. The system combines convolutional neural networks with classical data processing methods, and works with very low-quality video from cameras installed in dog shelters.
... The video recording, together with diagnostic data concerning the dog, obtained both from owner reports and physical examination of the dog, is stored in the VetDB database. K9-Blyzer, a tool for automatic tracking of dog movement, is then run on the video footage to extract spatio-temporal features such as the dog's location, speed, and distance to the owner (see [2] for a detailed description of the K9-Blyzer tool). Fig. 2 shows a dog during consultation tracked by K9-Blyzer. ...
Conference Paper
In this paper I describe my research agenda as part of an ongoing research project for developing a video-based decision support system for behavioral veterinarians. In their daily practice, such veterinarians diagnose and treat behavioral problems of companion dogs. Since problems such as hyperactivity, anxiety and aggression can greatly compromise the well-being of dogs and their owners, their early diagnosis and treatment may be critical. In their diagnoses, vets often rely on owner reports and assessment scales, which are inherently subjective. In my research, I investigate ways to provide more objective automatic tools for supporting the assessment of dog behavioral problems, focusing specifically on the problem of canine ADHD as a case study.
... Similar projects focus on monitoring dogs' caloric input/output, as well as exercise and movement habits [41]. Other topics concern identifying physical behaviors such as chewing, barking, pawing, and sniffing with collar-worn sensors [25] or computer vision [1]. Consumer-oriented activity trackers directed towards companion dogs (e.g. ...
Conference Paper
Full-text available
We present a qualitative content analysis of visual-verbal social media posts, where ordinary dog owners pretend to be their canine, to identify meaningful facets of their dogs' life-worlds, e.g. pleasures of human-dog relations, dog-dog relations, food, etc. We use this knowledge to inform the design of "quantified pets". The study targets a general problem in Animal-Computer Interaction (ACI), i.e. understanding animals when designing "for" them while lacking a common language. Several approaches, e.g. ethnography and participatory design, have been appropriated from HCI without exhausting the issue. We argue for methodological creativity and pluralism by suggesting an additional approach drawing on "kinesthetic empathy". It implies understanding animals by empathizing with their bodily movements over time and decoding the realities of their life-worlds. This and other related approaches have inspired animal researchers to conduct more or less radical participant observations over extended durations to understand the perspective of the other. We suggest that dog owners who share their lives with their dogs already possess an understanding similar to these experts', and thus hold important experiences of canine life that could be used to understand individual dogs and inspire design.
Conference Paper
How do Animal-Computer Interaction (ACI) researchers working with live animal participants assess the animals’ willingness to participate in their research? In this paper we present the results of a systematic literature review designed to answer this question by examining the Proceedings of the ACM International Animal Computer Interaction Conference. From 2016-2022, these proceedings included 38 full papers that reported results from research with live animal participants. We found 1) only 74% or 28/38 of the papers reported how they assessed animal participants’ willingness to engage during their research, 2) the authors of papers focused on species other than dogs had a much higher rate of providing this information than did the authors of dog-based studies (100% or 12/12 non-dog papers v 62% or 16/26 of dog-based papers), 3) most researchers who addressed the issue of an animal participant’s willingness to engage in the research relied on some form of mediated consent, informed by behavioral observation methods, to do so. However, the researchers focused on non-dog species were much more likely than researchers focused on dogs to include elements of contingent consent in their protocols (75% (9/12) of the non-dog studies v 12% (3/26) of the dog-related studies). We argue that providing each other with more details about our research methods and possibly more fully embracing the principles of contingent consent would further ACI researchers’ existing ethical commitment to our animal participants, increase our adherence to standard scientific research practice, and accelerate the continued development of the field of Animal-Computer Interaction.
Article
Poultry tracking is primarily used for evaluating abnormal behaviour and predicting disease in poultry. Offline video is often used to track and record poultry behaviour. However, poultry are group-housed animals, and the difficulty of accurately monitoring large-scale poultry farms lies in the automatic tracking of individual birds. To this end, this paper demonstrates the use of a deep regression network to track single poultry based on computer vision technology. Following the AlexNet architecture, the broiler chicken area of the previous frame and the search area of the next frame were input into the convolutional layers, and the coordinates of the predicted area were obtained by fully-connected-layer regression. The method was compared with some existing tracking algorithms. Preliminary tests revealed that, when compared with the MeanShift algorithm (MS), the Multitask learning algorithm (MIL), the Kernel Correlation Filter (KCF), Adaptive Correlation Filters (ACF) and Tracking-Learning-Detection (TLD), the poultry tracking algorithm proposed in this paper, named the TBroiler tracker, has better performance on overlap ratio, pixel error and failure rate. TBroiler achieved a mixed tracking performance evaluation (MTPE) of 0.730; the evaluation scores of the other methods were 0.362 (MS), 0.355 (MIL), 0.434 (KCF), 0.051 (ACF), and 0.248 (TLD). In addition, the method can be further optimised to improve the overall success rate of verification.
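For reference, the overlap ratio used in such tracking evaluations is standard intersection-over-union; a minimal sketch (not the paper's code):

```python
def overlap_ratio(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x, y, w, h),
    the standard overlap measure used when comparing trackers."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

# A tracked frame is typically counted as a failure when the overlap
# with the ground-truth box falls below a threshold such as 0.5.
```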
Conference Paper
Full-text available
Animal-Computer Interaction (ACI) is a new and quickly developing discipline, closely related to HCI and drawing on some of its theoretical frameworks and research methodologies. The first edition of the Workshop on Research Methods in ACI (RM4ACI) was co-located with the Third International Conference on Animal-Computer Interaction, which took place in Milton Keynes, UK in November 2016. This paper presents an overview of the workshop, including insights from discussions on some of the challenges faced by the ACI community as it works to develop ACI as a discipline, and on important opportunities for cross-fertilization between HCI and ACI that the HCI community could consider.
Conference Paper
Full-text available
This research presents a preliminary study conducted on a cat fitted with biotelemetry devices. The aim was to explore the feline's wearability experience of bearing off-the-shelf products. The cat's reactions to the device presence were recorded and findings suggest the need for a design approach centred on the wearer. A wearer-centred framework to inform the design of biotelemetry interventions for animals is then proposed.
Article
Full-text available
The nature of mental representation of others plays a crucial role in social interactions. Dogs present an ideal model species for the investigation of such mental representations because they develop social ties with both conspecifics and heterospecifics. Former studies found that dogs' preference for larger food quantity could be reversed by humans who indicate the smaller quantity. The question is whether this social bias is restricted to human partners. We suggest that after a short positive social experience, an unfamiliar moving inanimate agent (UMO) can also change dogs' choice between two food quantities. We tested four groups of dogs with different partners: In the (1) Helper UMO and (2) Helper UMO Control groups the partner was an interactive remote control car that helped the dog to obtain an otherwise unreachable food. In the (3) Non-helper UMO and (4) Human partner groups dogs had restricted interaction with the remote control car and the unfamiliar human partners. In the Human partner, Helper UMO and Helper UMO Control groups the partners were able to revert dogs' choice for the small amount by indicating the small one, but the Non-helper UMO was not. We suggest that dogs are able to generalize their wide range of experiences with humans to another type of agent as well, based on the recognition of similarities in simple behavioural patterns.
Article
Full-text available
Monitoring and describing the physical movements and body postures of animals is one of the most fundamental tasks of ethology. The more precise the observations are, the more sophisticated the interpretations can be about the biology of a certain individual or species. Animal-borne data loggers have recently contributed much to the collection of motion data from individuals; however, the problem of translating these measurements into distinct behavioural categories to create an ethogram has not yet been overcome. The objective of the present study was to develop a "behaviour tracker": a system composed of a multiple-sensor data-logger device (with a tri-axial accelerometer and a tri-axial gyroscope) and a supervised learning algorithm, as a means of automated identification of the behaviour of freely moving dogs. We collected parallel sensor measurements and video recordings of each of our subjects (Belgian Malinois, N=12; Labrador Retrievers, N=12), which were guided through a predetermined series of standard activities. Seven behavioural categories (lay, sit, stand, walk, trot, gallop, canter) were pre-defined and each video recording was tagged accordingly. Evaluation of the measurements was performed by support vector machine (SVM) classification. During the analysis we used different combinations of independent measurements for training and validation (belonging to the same or different individuals, or using different training data sizes) to determine the robustness of the application. We reached an overall accuracy above 90% for perfect identification of all seven defined categories of behaviour when both training and validation data belonged to the same individual, and over 80% perfect recognition rate using a generalized training data set of multiple subjects. Our results indicate that the present method provides a good model for an easily applicable, fast, automatic behaviour classification system that can be trained with arbitrary motion patterns and potentially be applied to a wide range of species and situations.
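A minimal sketch of this kind of windowed-feature SVM pipeline using scikit-learn; the mean/std features and window length are simplifying assumptions, not the study's exact feature set.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def window_features(acc, gyr, win=100):
    """Per-window mean and std of tri-axial accelerometer and gyroscope
    signals (each of shape (N, 3)); a simple stand-in for the paper's
    feature set, which is not fully specified in this excerpt."""
    feats = []
    for i in range(0, len(acc) - win, win):
        a, g = acc[i:i + win], gyr[i:i + win]
        feats.append(np.hstack([a.mean(0), a.std(0), g.mean(0), g.std(0)]))
    return np.array(feats)

# X: windowed features; y: labels from the tagged video (lay, sit, stand, ...).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
# clf.fit(X_train, y_train); accuracy = clf.score(X_test, y_test)
```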
Article
Full-text available
Robots offer new possibilities for investigating animal social behaviour. This method enhances controllability and reproducibility of experimental techniques, and it allows also the experimental separation of the effects of bodily appearance (embodiment) and behaviour. In the present study we examined dogs' interactive behaviour in a problem solving task (in which the dog has no access to the food) with three different social partners, two of which were robots and the third a human behaving in a robot-like manner. The Mechanical UMO (Unidentified Moving Object) and the Mechanical Human differed only in their embodiment, but showed similar behaviour toward the dog. In contrast, the Social UMO was interactive, showed contingent responsiveness and goal-directed behaviour and moved along varied routes. The dogs showed shorter looking and touching duration, but increased gaze alternation toward the Mechanical Human than to the Mechanical UMO. This suggests that dogs' interactive behaviour may have been affected by previous experience with typical humans. We found that dogs also looked longer and showed more gaze alternations between the food and the Social UMO compared to the Mechanical UMO. These results suggest that dogs form expectations about an unfamiliar moving object within a short period of time and they recognise some social aspects of UMOs' behaviour. This is the first evidence that interactive behaviour of a robot is important for evoking dogs' social responsiveness.
Article
Full-text available
The need for automating behavioural observations and the evolution of systems developed for that purpose are outlined. Automatic video tracking systems enable behaviour to be studied in a reliable and consistent way, and over longer time periods than if it is manually recorded. To overcome limitations of currently available systems and to meet researchers' needs as these have been identified, we have developed an integrated system (EthoVision) for automatic recording of activity, movement and interactions of insects. The system is described here, with special emphasis on file management, experiment design, arena and zone definition, object detection, experiment control, visualisation of tracks and calculation of analysis parameters. A review of studies using our system is presented, to demonstrate its use in a variety of entomological applications. This includes research on beetles, fruit flies, soil insects, parasitic wasps, predatory mites, ticks, and spiders. Finally, possible future directions for development are discussed.
Article
Full-text available
A newly developed behaviour registration system, Laboratory Animal Behaviour Observation, Registration and Analysis System (LABORAS) for the automatic registration of different behavioural elements of mice and rats was validated. The LABORAS sensor platform records vibrations evoked by animal movements and the LABORAS software translates these into the corresponding behaviours. Data obtained by using LABORAS were compared with data from conventional observation methods (observations of videotapes by human observers). The results indicate that LABORAS is a reliable system for the automated registration of eating, drinking, grooming, climbing, resting and locomotion of mice during a prolonged period of time. In rats, grooming, locomotion and resting also met the pre-defined validation criteria. The system can reduce observation labour and time considerably.
Article
Full-text available
Video tracking systems enable behavior to be studied in a reliable and consistent way, and over longer time periods than if it were manually recorded. The system takes an analog video signal, digitizes each frame, and analyses the resultant pixels to determine the location of the tracked animals (as well as other data). Calculations are performed on a series of frames to derive a set of quantitative descriptors of the animal's movement. EthoVision (from Noldus Information Technology) is a specific example of such a system, and its functionality that is particularly relevant to transgenic mice studies is described. Key practical aspects of using the EthoVision system are also outlined, including tips about lighting, marking animals, the arena size, and sample rate. Four case studies are presented, illustrating various aspects of the system: (1) The effects of disabling the Munc 18-1 gene were clearly shown using the straightforward measure of how long the mice took to enter a zone in an open field. (2) Differences in exploratory behavior between short and long attack latency mice strains were quantified by measuring the time spent in inner and outer zones of an open field. (3) Mice with hypomorphic CREB alleles were shown to perform less well in a water maze, but this was only clear when a range of different variables were calculated from their tracks. (4) Mice with the trkB receptor knocked out in the forebrain also performed poorly in a water maze, and it was immediately apparent from examining plots of the tracks that this was due to thigmotaxis. Some of the latest technological developments and possible future directions for video tracking systems are briefly discussed.
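Descriptors like those in case studies (1) and (2) are straightforward to compute from tracked positions; a minimal sketch, assuming per-frame (x, y) coordinates and a rectangular zone:

```python
def time_in_zone(track, zone, fps=25.0):
    """Seconds an animal spends inside a rectangular zone (x0, y0, x1, y1),
    given a track of per-frame (x, y) positions; a typical descriptor
    derived from video-tracking data."""
    x0, y0, x1, y1 = zone
    frames_inside = sum(1 for x, y in track if x0 <= x <= x1 and y0 <= y <= y1)
    return frames_inside / fps

def latency_to_enter(track, zone, fps=25.0):
    """Seconds until the first zone entry, e.g. for a zone-entry measure
    like that of case study (1); None if the zone is never entered."""
    x0, y0, x1, y1 = zone
    for i, (x, y) in enumerate(track):
        if x0 <= x <= x1 and y0 <= y <= y1:
            return i / fps
    return None
```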
Article
Full-text available
An algorithm that categorises animal locomotive behaviour by combining detection and tracking of animal faces in wildlife videos is presented. As an example, the algorithm is applied to lion faces. The detection algorithm is based on a human face detection method utilising Haar-like features and AdaBoost classifiers. The face tracking is implemented by applying a specific interest model that combines low-level feature tracking with the detection algorithm. By combining the two methods in a specific tracking model, reliable and temporally coherent detection/tracking of animal faces is achieved. The information generated by the tracker is used to automatically annotate the animal's locomotive behaviour. The annotation classes of locomotive processes for a given animal species are predefined by a large semantic taxonomy of the wildlife domain. Experimental results are presented.
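A minimal sketch of Haar-feature/AdaBoost cascade detection with OpenCV; the bundled frontal-face cascade stands in for a cascade trained on animal faces, and the video file name is a placeholder.

```python
import cv2

# Load a pretrained Haar cascade (AdaBoost-boosted Haar-like features).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("wildlife_clip.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Per-frame detections; in the paper's scheme these would seed and
    # re-anchor the low-level feature tracker.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```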
Article
Full-text available
Real-time segmentation of moving regions in image sequences is a fundamental step in many vision systems, including automated visual surveillance, human-machine interfaces, and very low-bandwidth telecommunications. A typical method is background subtraction. Many background models have been introduced to deal with different problems. One successful solution is the per-pixel multi-colour background model proposed by Grimson et al. [1,2,3]. However, that method suffers from slow learning at the beginning, especially in busy environments, and it cannot distinguish between moving shadows and moving objects. This paper presents a method which improves this adaptive background mixture model. By reinvestigating the update equations, we utilise different equations at different phases. This allows our system to learn faster and more accurately, as well as adapt effectively to changing environments. A shadow detection scheme is also introduced, based on a computational colour space that makes use of our background model. A comparison has been made between the two algorithms. The results show the speed of learning and the accuracy of the model using our update algorithm over Grimson et al.'s tracker. When incorporated with shadow detection, our method results in far better segmentation than that of Grimson et al.
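OpenCV's MOG2 background subtractor implements a per-pixel adaptive Gaussian-mixture model with shadow detection in this spirit (though not the exact update scheme proposed here); a minimal sketch with a placeholder video path:

```python
import cv2

# Adaptive Gaussian-mixture background subtraction with shadow detection.
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500, varThreshold=16, detectShadows=True)

cap = cv2.VideoCapture("surveillance.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)   # 255 = foreground, 127 = shadow, 0 = background
    # Threshold above the shadow value to keep only true moving objects.
    moving = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
cap.release()
```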
Conference Paper
As drones are quickly becoming part of our everyday lives, dogs become inevitably exposed to them. Moreover, dog-drone interactions have far-reaching applications in search and rescue operations and other domains. This short note calls for taking an ACI, user-centric perspective on dog-drone interaction, informing the design of interactions which are safe, stress-free and enriching for our canine companions.
Conference Paper
3D sensing hardware, such as the Microsoft Kinect, allows new interaction paradigms that would be difficult to accomplish with traditional RGB cameras alone. One basic step in realizing these new methods of animal-computer interaction is posture and behavior detection and classification. In this paper, we present a system capable of identifying static postures for canines that does not rely on hand-labeled data at any point during the process. We create a model of the canine based on measurements automatically obtained from the first few captured frames, reducing the burden on users. We also present a preliminary evaluation of the system with five dogs, which shows that the system can identify the "standing," "sitting," and "lying" postures with approximately 70%, 69%, and 94% accuracy, respectively.
Article
The relative contribution of evolutionary and ontogenetic mechanisms to the emergence of communicative signals in social interactions is one of the central questions in social cognition. Most previously used methods utilized the presentation of a novel signal or a novel context to test effects of predisposition and/or experience. However, all share the common problem that the familiar social partners used in the test context as actors carry over a variety of contextual information from previous interactions with the subjects. In the present study we utilized a novel method for separating the familiar actor from the action. We tested whether dogs behave in a socially competent way towards an unidentified moving object (UMO) in a communicative situation after interacting with it in a different context. We found that dogs were able to find hidden food based on the approach behaviour of the UMO only if they obtained previous experience with it in a different context. In contrast no such prior experience was needed in the case of an unfamiliar human partner. These results suggest that dogs' social behaviour is flexible enough to generalize from previous communicative interactions with humans to a novel unfamiliar partner, and this inference may be based on the dogs' well-developed social competence. The rapid adjustment to the new context and continued high performance suggest that evolutionary ritualization also facilitates the recognition of potentially communicative actions.
Conference Paper
Training and handling working dogs is a costly process and requires specialized skills and techniques. Less subjective and lower-cost training techniques would not only improve our partnership with these dogs but also enable us to benefit from their skills more efficiently. To facilitate this, we are developing a canine body-area network (cBAN) combining sensing technologies and computational modeling to provide handlers with a more accurate interpretation for dog training. As a first step, we used inertial measurement units (IMU) to remotely detect the behavioral activity of canines. Decision tree classifiers and Hidden Markov Models were used to detect static postures (sitting, standing, lying down, standing on two legs and eating off the ground) and dynamic activities (walking, climbing stairs and walking down a ramp) based on heuristic features of the accelerometer and gyroscope data provided by a wireless sensing system deployed on a canine vest. Data was collected from 6 Labrador Retrievers and a Kai Ken. Analysis of IMU location and orientation helped to achieve high classification accuracies for static and dynamic activity recognition.
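A minimal sketch of decision-tree activity classification with a leave-one-dog-out split, so reported accuracy reflects generalization to unseen subjects; the placeholder features and counts are illustrative, not the study's data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# X: windowed accelerometer/gyroscope features, y: posture/activity labels,
# groups: which dog each window came from (placeholder data shown here).
X = np.random.rand(700, 12)
y = np.random.randint(0, 8, size=700)      # 8 postures/activities
groups = np.repeat(np.arange(7), 100)      # 7 dogs, 100 windows each

# Leave-one-dog-out cross-validation: each fold trains on six dogs and
# tests on the held-out dog.
tree = DecisionTreeClassifier(max_depth=8)
scores = cross_val_score(tree, X, y, groups=groups, cv=LeaveOneGroupOut())
print("per-dog accuracies:", scores.round(2))
```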
Article
Changes in regulations for livestock animals will in the near future call for loose-housed pig breeding systems. These new systems will increase the workload for farmers, as locating and identifying animals will require more time than before. This paper presents a real-time computer vision system for tracking pigs in loose-housed stables, which will ease the farmers' workload in identifying and locating individual animals. The system consists of a camera and a PC running a tracking algorithm that estimates the positions and identities of the pigs. The tracking algorithm operates in two steps. The first step builds up support maps pointing to preliminary pig segments in each video frame. In the second step, the support-map segments are used to build a 5D Gaussian model of each individual pig (i.e. position and shape). The system has software correction for the fisheye distortion of the camera lens; the fisheye lens allows the camera to monitor a much larger area of the stable. The algorithms were developed in MATLAB, implemented in C, and run in real time. Experiments in the lab and in the stable demonstrate the robustness of the system, which can track at least 3 pigs over a longer time span (more than 8 min) without losing track of the identity of individual pigs in a realistic experiment.
Article
Previous work has shown how a trainable flexible model (a point distribution model) can be used to locate pigs in images. This paper extends the idea to tracking animal movements through sequences of images where a single pig is viewed from above. As well as position and rotation, more subtle motion such as bending and head nodding can be modelled. This type of model based tracking could be used to characterise animal behaviour over time. The technique was used on seven sequences and worked well in most cases. However, it is possible to lose lock, as happened in one sequence, and the method currently reported cannot restart tracking. Further developments are required to investigate model fitting methods and high level control over the fitting and tracking process.
Article
We introduce the wide range of problems that exist in the domain of animal ethology. We then develop a technique to solve one such problem using aspects of computer vision. The method processes a video sequence of broiler chickens in a group-housing pen. A new technique has been developed for extracting a background image, which compensates for the particular idiosyncrasies inherent to the problem. A multi-faceted approach to tracking is demonstrated, and preliminary examination shows the approach to be robust.
Article
Two border following algorithms are proposed for the topological analysis of digitized binary images. The first one determines the surroundness relations among the borders of a binary image. Since the outer borders and the hole borders have a one-to-one correspondence to the connected components of 1-pixels and to the holes, respectively, the proposed algorithm yields a representation of a binary image, from which one can extract some sort of features without reconstructing the image. The second algorithm, which is a modified version of the first, follows only the outermost borders (i.e., the outer borders which are not surrounded by holes). These algorithms can be effectively used in component counting, shrinking, and topological structural analysis of binary images, when a sequential digital computer is used.
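OpenCV's findContours implements this border-following algorithm; a minimal sketch, where RETR_EXTERNAL corresponds to the second (outermost-borders-only) algorithm:

```python
import cv2
import numpy as np

# Build a small binary image with one filled component.
binary = np.zeros((100, 100), dtype=np.uint8)
cv2.rectangle(binary, (20, 20), (60, 60), 255, -1)

# RETR_EXTERNAL follows only the outermost borders.
contours, hierarchy = cv2.findContours(
    binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print("outer borders found:", len(contours))   # -> 1

# RETR_CCOMP instead recovers the outer-border/hole-border surroundness
# relations computed by the first algorithm.
```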
Article
In 1960, R.E. Kalman published his famous paper describing a recursive solution to the discrete-data linear filtering problem. Since that time, due in large part to advances in digital computing, the Kalman filter has been the subject of extensive research and application, particularly in the area of autonomous or assisted navigation. The Kalman filter is a set of mathematical equations that provides an efficient computational (recursive) means to estimate the state of a process in a way that minimizes the mean of the squared error. The filter is very powerful in several respects: it supports estimation of past, present, and even future states, and it can do so even when the precise nature of the modeled system is unknown. The purpose of this paper is to provide a practical introduction to the discrete Kalman filter. This introduction includes a description and some discussion of the basic discrete Kalman filter, a derivation, description and some discussion of the extended Kalman filter, and a relatively simple (tangible) example with real numbers and results.
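A minimal numpy sketch of the predict/update cycle for a 1-D constant-velocity track, following the standard discrete Kalman filter equations; the noise covariances are illustrative values.

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])              # we measure position only
Q = 1e-3 * np.eye(2)                    # process noise covariance
R = np.array([[0.25]])                  # measurement noise covariance

x = np.zeros((2, 1))                    # initial state estimate
P = np.eye(2)                           # initial estimate covariance

for z in [0.9, 2.1, 2.9, 4.2, 5.0]:     # noisy position measurements
    # Predict: propagate the state and its uncertainty forward one step.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: blend prediction and measurement via the Kalman gain.
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    print(f"measured {z:.1f} -> filtered position {x[0, 0]:.2f}")
```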