Citation: Zhang, W.; Wang, Y.; Guo, L.; Falzon, G.; Kwan, P.; Jin, Z.; Li, Y.; Wang, W. Analysis and Comparison of New-Born Calf Standing and Lying Time Based on Deep Learning. Animals 2024, 14, 1324. https://doi.org/10.3390/ani14091324
Academic Editors: María Dolores Fernández Rodríguez and Manuel Ramiro Rodríguez Rodríguez
Received: 12 April 2024
Revised: 25 April 2024
Accepted: 26 April 2024
Published: 29 April 2024
Copyright: © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
Analysis and Comparison of New-Born Calf Standing and Lying
Time Based on Deep Learning
Wenju Zhang 1, Yaowu Wang 1,2,*, Leifeng Guo 1, Greg Falzon 3,4, Paul Kwan 5, Zhongming Jin 1, Yongfeng Li 1 and Wensheng Wang 1,*
1 Agricultural Information Institute, Chinese Academy of Agricultural Sciences, Beijing 100086, China; zhangwenju@caas.cn (W.Z.); guoleifeng@caas.cn (L.G.); jinzhongming@caas.cn (Z.J.); liyongfeng_1116@163.com (Y.L.)
2 Laboratory of Geo-Information Science and Remote Sensing, Wageningen University & Research, 6708 PB Wageningen, The Netherlands
3 College of Science and Engineering, Flinders University, Adelaide, SA 5042, Australia; greg.falzon@flinders.edu.au
4 School of Science and Technology, University of New England, Armidale, NSW 2351, Australia
5 School of Engineering and Technology, College of ICT, Central Queensland University, Rockhampton, QLD 4701, Australia; w.kwan@cqu.edu.au
* Correspondence: yaowu.wang@wur.nl (Y.W.); wangwensheng@caas.cn (W.W.)
Simple Summary: Illness and death are unavoidable problems in calf rearing, and sicknesses such as diarrhoea often lead to calf deaths because they are not detected in a timely fashion. Starting from practical application needs, this research proposes a monitoring system based on deep learning technology that monitors the daily standing and lying behaviour of calves to predict their condition and adaptation to the environment. By analysing the standing and lying time of calves, the system can provide early warnings about their condition and health status. This research helps to promptly grasp calves’ condition and growth status, thereby improving their welfare and management, enhancing the health of reared calves, ensuring the quality and safety of meat and milk, and reducing production costs. The method also offers a new idea for the construction of smart ranches, as precision and smart ranches are not only a demand of consumers but also an inevitable direction for the development of the breeding industry.
Abstract: Standing and lying are the fundamental behaviours of quadrupedal animals, and the ratio of their durations is a significant indicator of calf health. In this study, we proposed a computer vision method for the non-invasive monitoring of calves’ behaviours. Cameras were deployed at four viewpoints to monitor six calves on six consecutive days. YOLOv8n was trained to detect standing and lying calves. The daily behavioural budget was then summarised and analysed based on automatic inference on untrained data. The results show a mean average precision of 0.995 and an average inference speed of 333 frames per second. The maximum error in the estimated daily standing and lying time for a total of 8 calf-days is less than 14 min. Calves with diarrhoea had about 2 h more daily lying time (p < 0.002), 2.65 more daily lying bouts (p < 0.049), and 4.3 min less daily lying bout duration (p = 0.5) compared to healthy calves. The proposed method can help in understanding calves’ health status based on automatically measured standing and lying time, thereby improving their welfare and management on the farm.
Keywords: behaviour monitoring; animal welfare; deep learning; health indicator
1. Introduction
Along with an increasing quality of life, concerns about milk quality have increased the
interest in cows’ physical and emotional welfare during breeding [1]. A potential approach
to assess cows’ welfare status is to monitor their behaviours. Apart from observing their
abnormal behaviours such as fighting and aggression, monitoring their normal animal
behaviours like eating, standing, walking, and lying is becoming popular, and could be
used to analyse the physical condition of cows and the impact of the external environment
on them, thereby monitoring their health, emotions, and well-being [2].
Monitoring cows’ behaviour, including standing and lying, has undergone several
phases. It began with traditional manual observation and then progressed to the use of
electronic accessories. For example, Chapinal et al. used pedometers to monitor standing
and lying behaviours [3]; Jaeger et al. used electronic ear tags to monitor cow lying behaviour [4]; and Khanh et al. used 3-axis acceleration data to classify lying, standing, and feeding behaviours [5]. However, these accessories, which must be attached to cows, are
invasive and could not only be missed or damaged but could also cause injury and stress
to the cows [6].
Recently, cows’ standing and lying behaviour has been non-invasively monitored through
computer vision. Researchers have used deep learning techniques to detect individual
cows’ standing and lying behaviour. Behaviour recognition of cows by computer vision technology is not only conducive to good management of cows on dairy farms but also, owing to its non-invasive nature, beneficial to their welfare.
Fuentes et al. used deep learning networks to detect the standing and lying behaviour of
cows and compared the precision of different algorithms [7]. Computer vision algorithms can also be used to monitor the respiratory behaviour of multiple cows [8,9], as well as to identify and track the movements of dairy cows [10]. However, studies are relatively insufficient for new-born calves, whose growth significantly affects subsequent milk production quality. To monitor calf health conditions, standing and lying behaviour are important indicators [11–13]. Goharshahi et al. used accelerometer ear tags to monitor diarrhoea in female calves and found that calves with diarrhoea spent an average of 64.8 min longer lying down than healthy calves on the day before clinical identification [14]. Until now, few
studies have researched the automated monitoring and statistical analysis of the duration of calf behaviours using deep learning technology. Therefore, the research problem is to explore a
method that uses deep learning technology to monitor the time calves spend standing and
lying down as a predictor of calf health, thereby enabling real-time monitoring of calves to
improve the level of calf-rearing management.
The aim of this study was threefold: (1) to establish an identification system based on
deep learning algorithms to recognise both standing and lying behaviours of new-born
calves, (2) to predict the standing and lying time of new-born calves on a daily basis, and
(3) to compare the daily performance of healthy calves to those calves afflicted with diarrhoea. The results of this study would assist in the development of a computer vision
system which could help breeders to promptly identify new-born calves with abnormal
standing and lying behaviours compared to healthy calves.
2. Materials and Methods
All protocols involving animals were approved by the Experimental Animal Care
and Use Committee of the Institute of Animal Sciences, Chinese Academy of Agricultural
Sciences, Beijing, China (approval number IAS2021-220).
This section explains the process of data acquisition, the data preprocessing steps, the
dataset construction approach, and the implementation of the calf-behaviour prediction
method. Figure 1 shows the flowchart of the entire procedure.
Figure 1. The flowchart of the research.
2.1. The Setup of Experiments
Fieldwork was conducted from 19 to 24 May 2021, at Yinxiangweiye Dairy Farm in
Shandong Province, China, where more than 1000 Holstein cows and around
100 calves were raised. New-born (born within 15 days) calves were separately housed in
calf cages, where disturbance was kept to a minimum. For this study, six new-born calves
were randomly selected, including four healthy calves (ID: 9350, 9347, 9348, 9349) and
two sick calves (ID: 9389, 9351) (Table 1). All new-born calves participating in the experi-
ment were reared under the same conditions as other new-born calves on the farm during
the experimental period. Calves that were not found to have any disease throughout
the experiment were defined as healthy calves. Calves with diarrhoea were classified as
sick calves. Diarrhoea was diagnosed based on faeces observation and an elevated rectal
temperature. Two types of cages are used on the farm, which are cage A and cage B. Cage
A is 3 m long, 2.5 m wide, and 3 m high. Cage B is 2.5 m long, 2 m wide, and 2 m high. The
floors of both were bedded with a 15 cm thick layer of dry rice bran that was refreshed
every 25 days. During the six recording days, five calves were housed in cage type A, while
the other calf was housed in cage type B (Figure 2). They were all provided with unlimited
forage and fed 4-L-milks with a bucket twice a day at 7 a.m. and 5 p.m., respectively, and
given 300 millilitres of water with a bucket at 7 a.m., 12 a.m., and 5 p.m., respectively. There
was a ventilation system in the shed. The daily average temperature and humidity index
ranged from 73.14 to 81.06 during the experiment. Our experiment did not change the daily
living environment of the calf, but only added a camera outside the calf cage to collect data.
Table 1. The profiles of the six calves for data acquisition.
ID Birth Date (m.dd) Physical Condition Weight (kg) Cage Type
9389 5.15 Diarrhoea 30 A
9350 5.09 Healthy 38 A
9349 5.08 Healthy 36 A
9348 5.07 Healthy 36 A
9351 5.09 Diarrhoea 31 A
9347 5.07 Healthy 36 B
Figure 2. Schematic of the video recording process.
Figure 2 illustrates the procedure of data acquisition. Raw data were acquired by using
six Hikvision cameras (DS-3347WD-L, Hikvision Digital Technology Co., Ltd., Hangzhou,
China) with a resolution of 2580 × 1440 pixels at 25 Hz. Because this study requires 24/7 data collection, the quality of the data collected during the day and at night must be comparable, so a camera with an infrared function is required. Moreover, a higher-resolution camera is needed to obtain high-quality nighttime footage under low lighting. The selected camera provides both an infrared function and high resolution at a relatively low price. To cover the full cage in the field of view, each camera was deployed at one of four spots around the cages to record data (details are given in the Experimental setup instructions (Supplementary
Materials)). The raw data were 1080 videos, each with a length of approximately 49 min,
resulting in 30 videos representing a recording of 1 calf-day.
2.2. Dataset Construction
A dataset of 3406 images was constructed. Images were selected following four
conditions: (1) each calf in the experiment was covered, (2) all four recording angles were
included, (3) abundant patterns of lying and standing behaviours were involved, and (4) scenes under various illumination strengths were recorded. To build the dataset, 330 videos were manually selected. From each video, key-frames capturing the action transitions of the calves were extracted with OpenCV using a frame-difference function. Then, a manual selection was carried out to reduce duplication. The dataset comprises 1700 images of standing and 1706 images of lying. Images were labelled as “standing” or “lying” according to their content using LabelImg (version 1.8.6). Labels were saved in the txt format
that was suitable for the training of YOLO. Finally, the dataset was split into training and
validation sets in a ratio of 9.5 to 0.5 by shuffling and selecting images.
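The paper does not give the exact extraction routine, so the following is only a minimal sketch of key-frame extraction with OpenCV frame differencing under assumed settings; the function name, the difference threshold, and the minimum gap between kept frames are illustrative choices rather than the authors’ parameters.

```python
import cv2

def extract_keyframes(video_path, diff_threshold=30.0, min_gap=25):
    """Keep frames where the mean absolute difference to the previous frame is large,
    i.e. likely posture transitions. Threshold and gap are illustrative values only."""
    cap = cv2.VideoCapture(video_path)
    keyframes, prev_gray, last_kept, idx = [], None, -min_gap, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            score = cv2.absdiff(gray, prev_gray).mean()  # mean per-pixel change
            if score > diff_threshold and idx - last_kept >= min_gap:
                keyframes.append((idx, frame))
                last_kept = idx
        prev_gray = gray
        idx += 1
    cap.release()
    return keyframes
```

Candidate frames produced in this way would still be screened manually, as described above, before labelling.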
2.3. Model Training
To classify the standing and lying behaviour of calves, a YOLOv8n model was trained
via transfer learning. Compared to the preceding YOLO versions, YOLOv8 has structural
and architectural improvements and can recognise dynamic objects faster and more accurately. Among the YOLOv8 series, YOLOv8n is the smallest model, with a size of less than 10 MB, and runs the fastest, which suits our need to observe the standing and lying behaviour of calves in real time. The batch size and number of epochs were initialised as 48 and 300, respectively. The model was trained on a computer with an 8-core CPU and a 16 GB Tesla T4 GPU. Figure 3 demonstrates the training pipeline.
Figure 3. Flowchart of the model training process.
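As a rough illustration of this transfer-learning setup, the sketch below fine-tunes pretrained YOLOv8n weights with the Ultralytics Python API; the dataset configuration file, the early-stopping patience, and the file paths are assumptions made for the example and are not taken from the paper.

```python
from ultralytics import YOLO

# Fine-tune COCO-pretrained YOLOv8n weights on the calf images (transfer learning).
# "calf_posture.yaml" is a hypothetical dataset file listing the train/val image
# folders and the two classes, standing and lying.
model = YOLO("yolov8n.pt")
model.train(
    data="calf_posture.yaml",  # hypothetical dataset definition
    epochs=300,                # upper limit; training can stop early on convergence
    batch=48,
    patience=50,               # early-stopping patience is an assumed value
)

# The best checkpoint (best.pt) is saved automatically and can be reused for inference.
best = YOLO("runs/detect/train/weights/best.pt")
for result in best.predict("calf_video.mp4", stream=True):  # hypothetical video path
    pass  # each result holds the detected boxes and class labels for one frame
```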
2.3.1. Loss Function
The loss values of YOLOv8n in the recognition of standing and lying behaviour of calves include the classification loss of standing and lying behaviour (Error_cls), the distribution focal loss (Error_dfl), and the bounding box location loss (Error_bbox). Error_cls is the loss calculated from the difference between the predicted probabilities and the actual class labels; it measures how well the model is able to classify each object. Error_dfl is a modified version of focal loss used to address class imbalance by focusing more on hard-to-classify examples; it adjusts the focus based on the correctness of the class probability distribution, helping to prioritise learning on misclassified instances. Error_bbox measures the difference between the predicted bounding boxes (which define the location and size of the object) and the actual ground-truth bounding boxes; it is crucial for accurately determining where objects are located in the image. The loss function is shown in Formula (1).

Loss = Error_cls + Error_dfl + Error_bbox (1)
IOU is the intersection over union of A (the ground-truth bounding box of the frame) and B (the predicted bounding box of the frame). The symbols ∩ and ∪ represent mathematical operations on A and B: ∩ denotes the intersection of A and B, which for bounding boxes is the area overlapped by both the predicted bounding box (B) and the ground-truth bounding box (A); ∪ denotes the union of A and B, which for bounding boxes is the total area covered by either bounding box (A or B). IOU is defined as Formula (2).

IOU = (A ∩ B) / (A ∪ B) (2)
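To make Formula (2) concrete, a minimal Python sketch of IOU for two axis-aligned boxes follows; the (x1, y1, x2, y2) coordinate convention is an assumption chosen for the example.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)       # overlap area, A ∩ B
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter                          # combined area, A ∪ B
    return inter / union if union > 0 else 0.0

# Example: a ground-truth box A and a slightly shifted predicted box B
print(iou((10, 10, 110, 60), (20, 15, 115, 65)))  # ≈ 0.71
```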
2.3.2. Evaluation Metrics
This work uses the F1-score and mean Average Precision (mAP) as the evaluation metrics to evaluate the performance of YOLOv8n for calf standing and lying behaviour classification and recognition. The evaluation metric formulas are shown in the following equations,

Precision = TP / (TP + FP) (3)
Recall = TP / (TP + FN) (4)

F1 = 2 / (Precision^(-1) + Recall^(-1)) (5)

where TP denotes the number of True Positives, FP denotes the number of False Positives, and FN denotes the number of False Negatives.
Average precision (AP) is one of the standard metrics used to evaluate the sensitivity of the model recognition. It is calculated from the Precision–Recall (PR) curve, which describes both precision and recall over different object detector confidence thresholds. The AP metric is calculated as Formula (6).

The mean average precision (mAP) is the average of the AP values over all classes; the formula is shown in (7). In this application there are C = 2 classes (standing and lying). The higher the mAP value, the better the object detector performance. Different mAP metrics can be specified based on the level of overlap between the detected bounding box and the ground truth: mAP50 denotes the average AP when the IOU ≥ 50%, and mAP50–95 denotes the average AP over 50% ≤ IOU ≤ 95%.

AP = ∫₀¹ P(R) dR (6)

mAP = (1/C) · Σ_{i=1}^{C} AP_i (7)
Frames per second (FPS) denotes the prediction speed of the model, where N is the total number of predicted video frames and t_N is the total time taken by the model to predict the video. The FPS formula is shown in (8).

FPS = N / t_N (8)
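As a simple worked example of Formulas (3)–(5) and (8), the sketch below computes the metrics from raw counts; the numbers are illustrative placeholders and not the study’s confusion counts.

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from detection counts (Formulas (3)-(5))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 / (1 / precision + 1 / recall)
    return precision, recall, f1

def frames_per_second(n_frames, total_seconds):
    """Inference speed (Formula (8))."""
    return n_frames / total_seconds

print(detection_metrics(tp=168, fp=1, fn=2))   # ≈ (0.994, 0.988, 0.991)
print(frames_per_second(75000, 225.0))         # ≈ 333 FPS
```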
2.4. External Validation on 8 Calf-Days of 24 h Labelled Data
A trained model requires simple internal tests using a test set to verify its ability to
classify. However, the performance of the test set does not guarantee the applicability of our
system in practice and the generalisability of the model cannot be demonstrated. To prove
that the model can be used in practical application scenarios, it is very important to verify
the reliability and the effectiveness of the proposed method in practice [15]. Therefore,
the performance of the model on the test set is not described in this paper. To verify
the reliability of the YOLOv8n-based system in practice, the study randomly selected
8 calf-days of 24 h video data collected in a real breeding environment to manually count
the daily standing and lying time of calves (for ground truth see Table 2). Then, 4 out of
8 calf-days’ data with the maximum predictive error were selected, manually annotated
with daily standing and lying behaviour, and visualised using scatter plots. The scatter plots
of these 4 calf-days were compared with the predicted results of the system to demonstrate
the reliability of the system. Table 2 lists the details of the ground-truth annotation.
Table 2. Ground-truth standing and lying time of 8-calf-days’ data.
ID Selected Date (dd.hh–dd.hh) Start Time (yyyymmdd:hhmmss) Standing Time (h:mm:ss) Lying Time (hh:mm:ss) Total Time (hh:mm:ss)
9347 21.10–22.10 20210521:105926 5:52:43 18:06:54 23:59:37
9350 23.11–24.11 20210523:115547 5:33:39 18:28:22 24:02:01
9348 19.12–20.12 20210519:123442 5:07:25 18:52:19 23:59:44
9349 19.12–20.12 20210519:121701 5:21:51 18:42:10 24:04:01
9389 19.11–20.11 20210519:115715 3:37:16 20:26:36 24:03:52
9389 21.16–22.16 20210521:162747 4:08:04 19:51:56 23:59:45
9351 19.12–20.12 20210519:122813 3:39:29 20:22:35 24:02:04
9351 23.11–24.11 20210523:111728 3:47:54 20:14:05 24:01:59
Ground truth represents the total time of the 30 videos within 24 h (camera display time) from the start of video recording. The total time here is not exactly equal to 24 h because the camera automatically splits a day’s recording into about 30 videos of roughly 49 min each; at the boundaries between consecutive videos, some footage overlaps and some frames are lost, so the summed duration is not exactly 24 h.
Because the method uses the YOLOv8n classifier to identify frames of standing and lying behaviour, the standing and lying times of the calves are calculated in the system as follows:

Time = Σ_{i=1}^{30} (Frames of Video_i / FPS_i) (9)

Standing Time = Total number of Standing Frames / FPS (10)

Lying Time = Total number of Lying Frames / FPS (11)

where Frames of Video_i is the number of frames in video i; summed over the 30 videos recorded within 24 h, Formula (9) gives the total standing and lying time of each calf. FPS indicates the frame rate of the video, which is 25.
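A minimal sketch of how Formulas (9)–(11) could be computed from per-frame YOLOv8n predictions is given below; the weights path, class names, and confidence threshold are assumptions made for illustration rather than the authors’ exact implementation.

```python
from ultralytics import YOLO

def daily_standing_lying_time(video_paths, weights="best.pt", fps=25):
    """Accumulate per-frame standing/lying classifications over one calf-day
    (Formulas (9)-(11)). Assumes one calf per cage, so only the first detection
    in each frame is used; conf=0.5 is an illustrative threshold."""
    model = YOLO(weights)
    standing_frames = lying_frames = 0
    for path in video_paths:  # the ~30 videos recorded within 24 h
        for result in model.predict(path, stream=True, conf=0.5, verbose=False):
            if len(result.boxes) == 0:
                continue  # no calf detected in this frame
            name = result.names[int(result.boxes.cls[0])]
            if name == "standing":
                standing_frames += 1
            elif name == "lying":
                lying_frames += 1
    # Convert frame counts to seconds by dividing by the frame rate (25 fps)
    return standing_frames / fps, lying_frames / fps
```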
This study used the YOLOv8n-based system to predict the daily standing and lying time on the same 8 calf-days of video data that were manually counted as described above, in order to calculate the corresponding times. As a demonstration of the system’s detection, this study also selected the behaviour distributions corresponding to the manual annotations for 4 calf-days and output them as scatter plots. These 4 calf-days’ distributions are compared with the scatter plot distributions of the corresponding ground truth. The system prediction flow chart is shown in Figure 4.
Figure 4. The system’s flow chart for predicting calf standing and lying time in the real environment.
2.5. Automatic Prediction on 17 Calf-Days of 24 h Data
After verifying the generalisation ability of the system based on YOLOv8n, an auto-
matic prediction of calves’ standing and lying behaviours was implemented to compare
the differences in standing and lying time between healthy and sick calves. The study used
this system to automatically predict six calves’ standing and lying time data for a total of
17 calf-days.
2.6. Statistical Analysis
Linear regression analysis was performed to verify the statistical relationship of the
system prediction with the ground truth on the 8 calf-days of 24 h data. The results were shown with R² and RMSE. When the effectiveness of the system was verified, the model was further used to predict the untrained 17 calf-days’ data to explore the behavioural difference between healthy and sick calves. Statistical significance was defined at p < 0.05.
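For readers who wish to reproduce this kind of analysis, a minimal sketch of an OLS regression (as in Section 3.2.2) and a Wilcoxon rank sum test (as in Section 3.3) with SciPy is shown below; all array values are placeholders, not the study’s data.

```python
import numpy as np
from scipy import stats

# Placeholder values only: predicted vs. ground-truth daily lying time (hours) for
# eight validation calf-days, and daily lying time for healthy vs. sick calves.
predicted    = np.array([18.1, 18.5, 18.9, 18.7, 20.4, 19.9, 20.2, 20.3])
ground_truth = np.array([18.1, 18.5, 18.8, 18.7, 20.4, 19.9, 20.4, 20.2])

# OLS regression of ground truth on prediction
slope, intercept, r_value, p_value, stderr = stats.linregress(predicted, ground_truth)
rmse = np.sqrt(np.mean((ground_truth - (slope * predicted + intercept)) ** 2))
print(f"R^2 = {r_value ** 2:.3f}, RMSE = {rmse:.3f}")

# Wilcoxon rank sum test between health groups
healthy_lying = np.array([17.6, 17.9, 18.2, 18.0, 17.8])
sick_lying    = np.array([20.1, 19.9, 20.4, 20.2])
statistic, p = stats.ranksums(healthy_lying, sick_lying)
print(f"Wilcoxon rank sum p-value = {p:.3f}")
```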
3. Results
In this study, we presented a calf standing and lying behaviour identification system
based on the YOLOv8n algorithm that can quantitatively predict the daily standing and
lying time of calves. The system was used to predict the daily standing and lying time of
new-born calves for 17 calf-days. The daily standing and lying times of healthy and sick
calves predicted by the system were then analysed and compared.
3.1. Model Performance
When the epochs reached 219, the model stopped training early as no improvement was observed. At this point the training results were stable and the network had converged, so training ended automatically; the precision obtained was excellent, and the loss value was minimised on the validation set. Figure 5 shows the training loss, validation loss, precision, recall, mAP50 and mAP50–95 during the training and validation process, with the best model saved as best.pt.
Figure 5. Training curves, training loss, and validation performance. Epochs refers to the number of
times the entire dataset is passed through the neural network during training. Multiple epochs are
often required to adequately train the model and optimise its performance on the dataset.
As shown in Figure 6, when the model achieved optimal precision in recognising
calf standing and lying behaviours, the maximum F1 score was obtained at a confidence
threshold of 0.793.
Figure 6. The performance of F1-confidence curve.
The best model was observed at epoch 169, at which the recall reached 1 and mAP50
reached 99.5%. Table 3 shows the performance of the validation set under the best model. In the 170 images of the validation set, there are 112 images of standing calves and 56 images
of lying calves. Among these 170 validation images, two of them were occluded images
that did not capture calves (in one image, the calf was severely obstructed by the breeder;
in the other image, there was no calf, showing grey shadows). The validation results are all
correct. Meanwhile, the recognition speed of the model can reach 333 FPS in the process of
video recognition.
Observing the classification results of the validation set data using the best model
obtained, all images in the validation set showed correct recognition and classification of
calves’ standing and lying behaviours. A random selection of 16 frames in the validation
set is displayed in Figure 7to demonstrate the prediction results.
Figure 7. Four sets of images (a–d) represent the qualitative results of 16 verified example frames. The manual annotations are given at the top of each set of images, while the predictions are at the bottom of each set.
Table 3. The performance of the best model.
Model Average Precision (Lying) Average Precision (Standing) mAP50 mAP50–95 FPS Size (MB)
YOLOv8n 0.998 0.991 0.995 0.901 333 5.92
3.2. External Validation on 8 Days of 24 h Labelled Data
3.2.1. Prediction of Calf Standing and Lying Time over Untrained 8 Calf-Days
The largest error in daily standing time in the predicted results comes from Calf ID:
9351.19.12-20.12. The daily standing time is 592 s longer than the actual time, and the daily
lying time is 799 s shorter than the actual time. The second largest error comes from the
calf ID: 9350.23.11-24.11, where the daily standing time is 612 s less than the actual time
and the daily lying time is 494 s more than the actual time. The third major error is ID:
9349.19.12-20.12. The daily standing time is 159 s shorter than the actual time, and the daily
lying time is 79 s shorter than the actual time. The error in daily standing and lying time for
the other calf-days is within 211 s.
From the daily standing and lying behaviour time of six calves, with results predicted
by the system for a total of 8 calf-days, it can be observed that the predicted values of the
system are very close to the ground truth. To confirm whether the distribution of daily
standing and lying behaviours predicted by the system is consistent with the ground truth,
a scatter plot is illustrated in Figure 8 based on the 4 calf-days with the maximum predictive errors.
Figure 8. The 24 h behaviour distribution of the system’s automatic prediction and manually observed
statistics in the four worst-predicted data segments (a–d). Standing and lying are represented by
1 and 0, respectively.
According to Figure 8, the ground truth and the prediction are largely consistent in
behavioural distribution without significant misclassification between standing and lying
behaviours, confirming the good predictive performance of the proposed method.
3.2.2. OLS Regression Analysis on the Standing and Lying Time of Calves Based on the
Ground Truth and Prediction over 8 Calf-Days of Data
In the study, the Ordinary Least Squares (OLS) regression method was used to perform a regression analysis on the predicted and ground-truth values of the daily lying time, number of lying bouts, and average bout duration of the calves over the 8 calf-days. In the graph of each OLS regression analysis result, the horizontal axis represents the system’s predicted results, and the vertical axis represents the ground truth. The results showed that the OLS regression model fitted well. For the lying time, R² is 0.993 and the Root Mean Square Error (RMSE) is 5.83; for the lying bouts, R² is 0.811 and the RMSE is 1.06; for the bout duration, R² is 0.693 and the RMSE is 5.11. This indicates that the predicted values of the system regarding the lying time of calves are very close to the ground truth and that the system has good predictive ability. The data validation of the standing and lying time of the randomly selected calves over the 8 calf-days shows that the system is reliable. The OLS regression analysis results are shown in Figure 9.
Figure 9. OLS regression analysis results of predicted lying time, lying bouts, and bout duration over
8 calf-days (a–c).
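The lying time, number of lying bouts, and average bout duration analysed here are all derived from the per-frame standing/lying predictions; the following is a minimal sketch of that derivation, assuming a per-frame label sequence in which 1 denotes standing and 0 denotes lying (as in Figure 8). No bout smoothing is applied, since the paper does not describe any filtering.

```python
def lying_bout_stats(labels, fps=25):
    """Daily lying time (h), number of lying bouts, and average bout duration (min)
    from a per-frame label sequence (1 = standing, 0 = lying)."""
    lying_frames = labels.count(0)
    bouts, prev = 0, None
    for lab in labels:
        if lab == 0 and prev != 0:  # a lying bout starts at each standing-to-lying transition
            bouts += 1
        prev = lab
    lying_seconds = lying_frames / fps
    avg_bout_min = lying_seconds / 60 / bouts if bouts else 0.0
    return lying_seconds / 3600, bouts, avg_bout_min

# Tiny synthetic example with two lying bouts (fps=1 for readability)
print(lying_bout_stats([1, 1, 0, 0, 0, 1, 0, 0, 1], fps=1))
```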
3.3. Practical Implementation Using 17 Calf-Days of 24 h Data
Given the excellent performance that the proposed system demonstrated in the 8-calf-day application, it was further applied to predict the standing and lying time of calves in actual scenarios, using 17 calf-days of video data on which the behaviour classification model had not been trained; the difference in standing and lying time between sick and healthy calves was then analysed based on the predicted results. The results showed that, compared to healthy calves, calves with diarrhoea had about two hours less standing time per day, corresponding to about two hours more lying time per day, 2.65 more lying bouts per day, and a 4.3 min shorter average lying bout duration.
When the Wilcoxon rank sum test was performed on the daily lying time, lying bouts, and bout duration data of the calves, p-values of 0.002, 0.049, and 0.50 were obtained, respectively. The p-values for daily lying time and lying bouts are both less than 0.05, indicating significant differences, whereas the bout duration p-value is greater than 0.05, indicating a non-significant result. Thus, there are significant differences in daily lying time and number of lying bouts between healthy calves and calves with diarrhoea, while the difference in average bout duration is not significant. The results are shown in Figure 10.
Figure 10. Daily lying time, number of lying bouts, and average bout duration using 17 calf-days of
data. Differences between healthy and sick calves are denoted by * (p < 0.05) (a–c).
4. Discussion
In this study, the aim was to achieve a robust behavioural classification and monitoring
system for individually housed calves. The study used this system to predict the daily
standing and lying time of new-born calves. It compared the daily standing and lying
behaviour of healthy calves and calves with diarrhoea through predicted results. To achieve
these aims, the ability of the trained deep learning model was validated through detecting
daily standing and lying behaviour in real settings, including both healthy and sick calves.
To the best of our knowledge, this study represents the first application of vision-based
deep learning methods specifically to non-invasive monitoring of daily standing and lying
behaviour time in calves.
4.1. Performance in Classifying Daily Standing and Lying Time of Calves
Generally speaking, the more training samples there are and the more diverse they
are, the better the generalisability of the trained network will be. Therefore, we fixed the
video cameras at different angles and heights to obtain diverse data for training YOLOv8
models. The results in the testing data show extremely good outcomes with almost no
misclassification. The mAP was as high as 0.995, which is consistent with previous studies
using YOLO networks to detect and classify cow behaviours [9,16]. Our work once again
demonstrates the strong ability of YOLO algorithms in detecting animal behaviours.
As pointed out by Cheng et al. [15], there is a lack of practical applications in relevant
studies. In this study, we validated the trained YOLO model by comparing the inference
results with the ground truth on the daily standing and lying time using 8 calf-days’
data. The strong statistical relationship between the inference results and the ground truth
suggests that our proposed method works effectively in classifying standing and lying
behaviours on a daily basis.
When analysing the reasons for the misclassification, the 2 days’ data with the highest
errors were all subject to blind spots during shooting. For example, for the side-view
cameras whose field of view covered the length of the nearest side of the cages, the calf’s
head protruding from the cage led to an incomplete calf body in frames and subsequent
error in behavioural recognition. It should be noted that when installing cameras in the
experiment, care must be taken to avoid blind spots in the field of view. The error in daily
standing and lying time can be further lowered to less than 5 min when only summarising
the 6 days’ data without blind spots. Thus, it can be speculated that the proposed method
should work better with proper camera mounting in practice.
4.2. Behavioural Differences between Healthy and Sick Calves
As the proposed system had shown its performance in the 8-calf-day application, it was further applied to the entire 17 calf-days of data to gain insights into the classification of health conditions. New-born calves have been reported to have a decreased lying time with age [14]. Our healthy calves, aged 10 ± 2 days, had a median daily lying time of 17.9 h, which was consistent with the results of Goharshahi et al. [14], where calves of similar ages were used.
As expected, the healthy calves spent a shorter time lying than the sick calves by
an average of 2 h per day. This result falls in line with, but is twice the difference reported in, the study of Goharshahi et al., where calves with diarrhoea had a 67.2 min longer average daily lying time on the onset day compared with control calves [14]. In addition, the
observed increased lying bouts in calves with diarrhoea are consistent with the study of
Swartz et al. [17]. Indeed, an increased posture change is generally known as an indicator
of restlessness and discomfort and can be witnessed more often in calves experiencing
abdominal pain. It should be noted that some studies reported very different results in
calves with diarrhoea, including decreased lying time and bouts, and decreased lying time
but increased bouts [17–19].
This inconsistency among studies can be explained by the different pathogenesis,
the severity of the outbreak, and the time of diagnosis [20]. For example, when systemic symptoms appear (e.g., fever, depression, decreased appetite), calves’ lying time increases significantly and they become lethargic [21,22]. In the present study, diarrhoea was diagnosed based on faeces observation and an elevated rectal temperature. Thus, the calves
in our study with diarrhoea may have experienced a more severe infection. Additionally,
our new-born calves could be more susceptible to severe disease responses compared with
older calves used in previous studies. In summary, the key to an effective monitoring
system for diarrhoea is to focus on the calf that is lying for an abnormal period of time, as
both an increase and decrease can indicate the onset of diarrhoea.
4.3. Limitations and Future Work
In this study, the observation time of calves was short, and the sample size was limited,
making it inadequate for time-series analysis of the disease process (non-occurrence, occurrence, progression). Our proposed vision-based, non-invasive approach has demonstrated
its effectiveness for practical applications in monitoring lying and standing behaviours,
without placing any burden on the calves. In the next step, we plan to apply it to conduct
larger-scale and longer-term observations and develop a predictive model for calf diarrhoea
based on the analysis of these observations. Farms only need to install the appropriate
cameras to monitor the standing and lying behaviour of calves, thereby improving calf
rearing. However, due to the generalisation issues of deep learning models, this method
cannot be directly applied to different farm animals. When directly transferred to different
farm settings, the accuracy decreases, necessitating incremental training with data from
new scenes.
In addition, faeces samples were not collected in this study to confirm the pathogenesis
of the diseases. Further investigation into the association between specific pathogens
and calf behaviour will contribute to the development of a more comprehensive health
monitoring system for calves.
5. Conclusions
In this study, we used six cameras from four angles to continuously record the daily
standing and lying behaviour of six calves born within 15 days on a dairy farm. We trained
a model based on the YOLOv8n CNN model to classify these behaviours. The feasibil-
ity of our model was tested by comparing the prediction with the ground-truth data of
8 calf-days, where the maximum difference is less than 14 min. Further, the model was
used to predict the daily standing and lying time data of six calves collected in a real
environment for a total of 17 calf-days. The predicted results demonstrated that healthy
Animals 2024,14, 1324 14 of 15
calves had about 2 h more daily standing time and 2 h less lying time than sick calves with
diarrhoea. This indicates that diarrhoea symptoms can alter the daily standing and lying
time of calves. This method demonstrates that using deep learning algorithms to establish
models can be used for monitoring the standing and lying behaviour of new-born calves.
Computer vision technology is gradually replacing traditional calf monitoring methods
due to its non-invasive monitoring characteristics. The methods discussed can assist farm
managers in monitoring calves. When abnormalities in standing and lying behaviours
occur, the system can promptly detect them and allow for focused attention. Systematic
monitoring can reduce calf mortality and production costs, as well as enhance calf health
and welfare. Additionally, it provides technical support for the health management of dairy
cows, optimisation of breeding, and the development of smart livestock farming.
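As a minimal sketch of how such a monitoring pipeline could be assembled with the Ultralytics YOLOv8 interface, the snippet below classifies sampled frames as standing or lying and accumulates the daily behavioural budget. The weight path, the class names, and the one-frame-per-minute sampling are assumptions made for illustration, not the exact pipeline used in this study.

```python
from collections import Counter
from ultralytics import YOLO

model = YOLO("yolov8n_calf.pt")  # hypothetical path to trained weights

def classify_frame(image_path, conf=0.5):
    """Return the detected posture label for one frame, or None if no calf is found."""
    result = model.predict(source=image_path, conf=conf, verbose=False)[0]
    if len(result.boxes) == 0:
        return None
    # Take the highest-confidence detection; class names are assumed to be
    # 'standing' and 'lying', matching the training labels described above.
    best = max(result.boxes, key=lambda b: float(b.conf))
    return model.names[int(best.cls)]

def daily_budget(frame_paths, minutes_per_frame=1):
    """Accumulate minutes of standing and lying over one calf-day,
    assuming frames are sampled at a fixed interval (e.g., one per minute)."""
    counts = Counter(classify_frame(p) for p in frame_paths)
    return {label: counts[label] * minutes_per_frame for label in ("standing", "lying")}
```

A per-calf daily budget produced in this way could then be compared against a healthy baseline to trigger alerts, as discussed in the preceding sections.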
Although our method has achieved good results, there are still some areas for improvement. This study only collected data from six calves over six days, which is insufficient to explain the changes in calves' daily standing and lying time and cannot establish standards for calves' daily standing and lying time. Future work will focus on collecting more continuous data on calves to establish a standard for the daily standing and lying time of healthy calves, which can be used for early warning of calf abnormalities.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ani14091324/s1. Figure S1: Two types of cages used on the farm; (a) type A is 3 m long, 2.5 m wide, and 3 m high; (b) type B is 2.5 m long, 2 m wide, and 2 m high. The floors of both were bedded with a 15 cm thick layer of dry rice bran that was refreshed every 25 days. Figure S2: Schematic of the video recording process. Figure S3: Data collection angles; the cameras recorded the calf cage from (a) the top centre, (b) the top front, (c) the front left, and (d) the front right. Figure S4: Calf farming and breeding environment.
Author Contributions: Conceptualisation, W.Z. and W.W.; methodology, W.Z., G.F. and P.K.; software, W.Z. and Y.W.; validation, W.Z.; formal analysis, Z.J.; investigation, W.Z. and Y.L.; resources, L.G.; data curation, W.Z. and Y.L.; writing—original draft preparation, W.Z. and Y.W.; writing—review and editing, W.Z., G.F. and P.K.; visualisation, W.Z. and Y.W.; supervision, W.W.; project administration, W.W. and L.G.; funding acquisition, W.W. and L.G. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the Key Research and Development Program of Ningxia
Autonomous Region, grant number [2022BBF02021], the Major Science and Technology Program of
Inner Mongolia Autonomous Region, grant number [2020ZD0004], the National Key Research and
Development Program of China, grant number [2021YFD1300500], and the Science and Technology
Innovation Project of the Chinese Academy of Agricultural Sciences, grant number [CAAS-ASTIP-
2016-AII].
Institutional Review Board Statement: All protocols involving animals were approved by the
Experimental Animal Care and Use Committee of the Institute of Animal Sciences, Chinese Academy
of Agricultural Sciences, Beijing, China (approval number IAS2021-220).
Informed Consent Statement: Not applicable.
Data Availability Statement: The data presented in this study are available upon request from the
corresponding author.
Acknowledgments: We are grateful to Yinxiang Weiye Farm in Shandong Province in China for their
kind support in data collection.
Conflicts of Interest: The authors declare no conflicts of interest in this paper.
References
1. Ruiz-Garcia, L.; Lunadei, L.; Barreiro, P.; Robla, I. A review of wireless sensor technologies and applications in agriculture and food industry: State of the art and current trends. Sensors 2009, 9, 4728–4750. [CrossRef] [PubMed]
2. Wang, J.; He, Z.; Zheng, G.; Gao, S.; Zhao, K. Development and validation of an ensemble classifier for real-time recognition of cow behavior patterns from accelerometer data and location data. PLoS ONE 2018, 13, e0203546. [CrossRef] [PubMed]
3. Chapinal, N.; de Passille, A.M.; Rushen, J.; Wagner, S.A. Effect of analgesia during hoof trimming on gait, weight distribution, and activity of dairy cattle. J. Dairy Sci. 2010, 93, 3039–3046. [CrossRef] [PubMed]
4. Jaeger, M.; Brügemann, K.; Brandt, H.; König, S. Associations between precision sensor data with productivity, health and welfare indicator traits in native black and white dual-purpose cattle under grazing conditions. Appl. Anim. Behav. Sci. 2019, 212, 9–18. [CrossRef]
5. Khanh, P.C.P.; Dinh Chinh, N.; Cham, T.T.; Vui, P.T.; Tan, T.D. Classification of cow behavior using 3-DOF accelerometer and decision tree algorithm. In Proceedings of the BME-HUST 2016—3rd International Conference on Biomedical Engineering, Hanoi, Vietnam, 5–6 October 2016; pp. 45–50.
6. He, D.; Liu, D.; Zhao, K. Review of perceiving animal information and behavior in precision livestock farming. Trans. Chin. Soc. Agric. Mach. 2016, 47, 231–244.
7. Fuentes, A.; Yoon, S.; Park, J.; Park, D.S. Deep learning-based hierarchical cattle behavior recognition with spatio-temporal information. Comput. Electron. Agric. 2020, 177, 105627. [CrossRef]
8. Wu, D.; Han, M.; Song, H.; Song, L.; Duan, Y. Monitoring the respiratory behavior of multiple cows based on computer vision and deep learning. J. Dairy Sci. 2023, 106, 2963–2979. [CrossRef] [PubMed]
9. Shu, H.; Bindelle, J.; Gu, X. Non-contact respiration rate measurement of multiple cows in a free-stall barn using computer vision methods. Comput. Electron. Agric. 2024, 218, 108678. [CrossRef]
10. Mar, C.C.; Zin, T.T.; Tin, P.; Honkawa, K.; Kobayashi, I.; Horii, Y. Cow detection and tracking system utilizing multi-feature tracking algorithm. Sci. Rep. 2023, 13, 17423. [CrossRef]
11. Borchers, M.R.; Chang, Y.M.; Proudfoot, K.L.; Wadsworth, B.A.; Stone, A.E.; Bewley, J.M. Machine-learning-based calving prediction from activity, lying, and ruminating behaviors in dairy cattle. J. Dairy Sci. 2017, 100, 5664–5674. [CrossRef] [PubMed]
12. Fregonesi, J.A.; Veira, D.M.; von Keyserlingk, M.A.G.; Weary, D.M. Effects of bedding quality on lying behavior of dairy cows. J. Dairy Sci. 2007, 90, 5468–5472. [CrossRef] [PubMed]
13. Sumi, K.; Maw, S.Z.; Zin, T.T.; Tin, P.; Kobayashi, I.; Horii, Y. Activity-integrated hidden Markov model to predict calving time. Animals 2021, 11, 385. [CrossRef] [PubMed]
14. Goharshahi, M.; Azizzadeh, M.; Lidauer, L.; Steininger, A.; Kickinger, F.; Öhlschuster, M.; Auer, W.; Klein-Jöbstl, D.; Drillich, M.; Iwersen, M. Monitoring selected behaviors of calves by use of an ear-attached accelerometer for detecting early indicators of diarrhea. J. Dairy Sci. 2021, 104, 6013–6019. [CrossRef] [PubMed]
15. Cheng, M.; Yuan, H.; Wang, Q.; Cai, Z.; Liu, Y.; Zhang, Y. Application of deep learning in sheep behaviors recognition and influence analysis of training data characteristics on the recognition effect. Comput. Electron. Agric. 2022, 198, 107010. [CrossRef]
16. Shu, H.; Bindelle, J.; Guo, L.; Gu, X. Determining the onset of heat stress in a dairy herd based on automated behaviour recognition. Biosyst. Eng. 2023, 226, 238–251. [CrossRef]
17. Swartz, T.H.; Schramm, H.H.; Petersson-Wolfe, C.S. Short communication: Association between neonatal calf diarrhea and lying behaviors. Vet. Anim. Sci. 2020, 9, 100111. [CrossRef]
18. Studds, M.; Deikun, L.; Sorter, D.; Pempek, J.; Proudfoot, K. The effect of diarrhea and navel inflammation on the lying behavior of veal calves. J. Dairy Sci. 2018, 101, 11251–11255. [CrossRef] [PubMed]
19. Sutherland, M.; Lowe, G.; Huddart, F.; Waas, J.; Stewart, M. Measurement of dairy calf behavior prior to onset of clinical disease and in response to disbudding using automated calf feeders and accelerometers. J. Dairy Sci. 2018, 101, 8208–8216. [CrossRef] [PubMed]
20. Pereira, R.V.; Progar, A.L.A.; Moore, D.A. Dairy Calf Treatment for Diarrhea: Are the Drugs We Use Effective? Washington State University Extension: Pullman, WA, USA, 2017; pp. 1–7.
21. Borderas, T.F.; de Passille, A.M.; Rushen, J. Behavior of dairy calves after a low dose of bacterial endotoxin. J. Anim. Sci. 2008, 86, 2920–2927. [CrossRef] [PubMed]
22. Ollivett, T.; Leslie, K.; Nydam, D.; Duffield, T.; Zobel, G.; Hewson, J.; Kelton, D. The effect of respiratory disease on lying behaviour in Holstein dairy calves. In Proceedings of the ADSA-ASAS-CSAS Joint Annual Meeting, Kansas City, MO, USA, 20–24 July 2014; FASS: Champaign, IL, USA, 2014; p. 34.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.