Citation: Fergus, P.; Chalmers, C.; Longmore, S.; Wich, S. Harnessing Artificial Intelligence for Wildlife Conservation. Conservation 2024, 4, 685–702. https://doi.org/10.3390/conservation4040041
Academic Editor: Antoni Margalida
Received: 19 September 2024
Revised: 4 November 2024
Accepted: 5 November 2024
Published: 11 November 2024
Copyright: © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Project Report
Harnessing Artificial Intelligence for Wildlife Conservation
Paul Fergus 1,*, Carl Chalmers 1, Steven Longmore 2 and Serge Wich 3
1 School of Computer Science and Mathematics, Liverpool John Moores University, Liverpool L3 3AF, UK; c.chalmers@ljmu.ac.uk
2 Astrophysics Research Institute, Liverpool John Moores University, Liverpool L3 5RF, UK; s.n.longmore@ljmu.ac.uk
3 School of Biological and Environmental Sciences, Liverpool John Moores University, Liverpool L3 3AF, UK; s.wich@ljmu.ac.uk
* Correspondence: p.fergus@ljmu.ac.uk
Abstract: The rapid decline in global biodiversity demands innovative conservation strategies. This
paper examines the use of artificial intelligence (AI) in wildlife conservation, focusing on the Con-
servation AI platform. Leveraging machine learning and computer vision, Conservation AI detects
and classifies animals, humans, and poaching-related objects using visual spectrum and thermal
infrared cameras. The platform processes these data with convolutional neural networks (CNNs) and
transformer architectures to monitor species, including those that are critically endangered. Real-time
detection provides the immediate responses required for time-critical situations (e.g., poaching),
while non-real-time analysis supports long-term wildlife monitoring and habitat health assessment.
Case studies from Europe, North America, Africa, and Southeast Asia highlight the platform’s suc-
cess in species identification, biodiversity monitoring, and poaching prevention. The paper also
discusses challenges related to data quality, model accuracy, and logistical constraints while outlining
future directions involving technological advancements, expansion into new geographical regions,
and deeper collaboration with local communities and policymakers. Conservation AI represents
a significant step forward in addressing the urgent challenges of wildlife conservation, offering a
scalable and adaptable solution that can be implemented globally.
Keywords: artificial intelligence (AI); wildlife conservation; machine learning; species identification;
poaching prevention; biodiversity monitoring
1. Introduction
The rapid decline in global biodiversity presents a profound threat to ecosystems
and human well-being, underscoring the urgent need for more effective conservation
strategies [1]. Traditional methods, while valuable, often fall short of addressing the scale
and complexity of contemporary environmental challenges. Within this context, artificial
intelligence (AI) has emerged as a transformative tool, offering new avenues for enhancing
conservation efforts [2]. Conservation AI exemplifies this innovative application of AI
in wildlife conservation. By leveraging advanced machine learning and computer vision
technologies [3], Conservation AI seeks to detect and classify wildlife, monitor biodiversity,
and support anti-poaching efforts and the detection of other illegal activities [4,5].
The platform, in collaboration with conservation partners around the world, employs
a combination of visual spectrum and thermal infrared cameras strategically deployed
on camera traps and drones to collect extensive data across various ecosystems. These
data are processed using state-of-the-art AI models, including convolutional neural net-
works (CNNs) [6] and transformer architectures [7], enabling the precise identification and
tracking of animal species, particularly those at risk of extinction. The dual capability of
real-time and non-real-time detection can potentially enhance the efficiency of conservation
efforts, providing immediate response options as well as long-term monitoring solutions.
The integration of AI into conservation practices offers several distinct advantages [8].
Firstly, it facilitates continuous, non-invasive wildlife monitoring, thereby minimising
human disturbance in sensitive habitats. Secondly, AI-driven analytics can swiftly pro-
cess large datasets, uncovering patterns and trends that may elude human observation.
Thirdly, AI’s role in detecting poaching activities allows for rapid response and intervention,
potentially averting the illegal hunting of endangered species [9].
Conservation AI’s innovative approach is closely aligned with the needs of con-
servation organisations, research institutions, and local communities, ensuring that the
technology is both effective and contextually appropriate. This close alignment stems from
the platform having been developed in collaboration with conservation groups. For instance,
in South America, Conservation AI collaborates with local researchers to monitor jaguar
populations and understand their habitat requirements. Similarly, in Africa, the platform
has been developed alongside partners to track the movements of critically endangered
species, such as pangolins and bongos in Uganda and Kenya, contributing valuable in-
sights to their preservation efforts [10]. These collaborations ensure that Conservation AI’s
technology is not only advanced but also context-specific.
Furthermore, Conservation AI is committed to continuous improvement and adapta-
tion. The platform regularly updates its AI models with new data and species information,
incorporating user feedback to enhance its performance. This iterative approach ensures
that Conservation AI remains at the forefront of technological advancements and is ca-
pable of addressing emerging conservation challenges. Additionally, the organisation
invests in training and capacity-building programmes, empowering local communities and
conservationists to effectively utilise AI tools.
This paper explores the methodologies employed by Conservation AI, its application
across diverse conservation projects, and the outcomes achieved thus far. Through the
examination of case studies and performance metrics, we aim to demonstrate the trans-
formative potential of AI in supporting wildlife conservation and addressing the critical
challenges faced by conservationists today.
2. Methodologies
The methodology employed by Conservation AI is meticulously designed to harness
advanced AI technologies for effective wildlife conservation. This section delineates the
systematic approach adopted by Conservation AI, encompassing the entire process from
data collection to the deployment of AI models for both real-time and non-real-time
detection and classification. The methodology is structured to ensure thorough data
acquisition, precise analysis, and the generation of actionable insights that directly inform
conservation efforts. By integrating state-of-the-art machine learning techniques with
practical field applications, Conservation AI seeks to significantly enhance the efficiency
and effectiveness of conservation initiatives.
A. Data Collection
Conservation AI employs a comprehensive and methodologically rigorous data col-
lection strategy, utilising both camera traps and drones. By partnering with leading con-
servation organisations globally, including Chester Zoo, the Endangered Wildlife Trust,
and the Greater Mahale Ecosystem Research and Conservation team, Conservation AI is
able to amass a diverse array of datasets. This carefully crafted dataset creation process
is what distinguishes Conservation AI from other organisations, ensuring that the data
collected are not only extensive but also highly relevant and precise, thereby enhancing the
effectiveness of the AI models used in conservation efforts.
The collected data encompass both visual spectrum and thermal infrared imagery,
including still images and video recordings of wildlife in their natural habitats. Camera
trap data provide critical insights into species presence and behaviour within specific
locales, while drone-acquired data expand observational capabilities to cover larger and
more remote regions. This dual-method approach facilitates the capture of comprehensive
wildlife activity patterns, encompassing both diurnal and nocturnal behaviours.
These meticulously curated datasets are utilised by Conservation AI to develop region-
specific models for areas such as Sub-Saharan Africa, the Americas, Asia,
the U.K., and Europe. Specialists in data collection, filtering, and quality control manage
the data to optimise model performance. The deployment of a robust data processing
pipeline ensures the production of high-quality datasets, which are essential for training AI
models that are both accurate and reliable (repeatable).
B. AI Models
The foundation of Conservation AI’s technology lies in its sophisticated AI models,
designed to perform complex image recognition tasks with a high degree of accuracy.
Central to these models are CNNs [6] and transformer architectures [7], both of which have
proven exceptionally effective in the domain of computer vision. CNNs are particularly
well suited for processing grid-like data, such as images, excelling at identifying patterns,
edges, and textures. Conversely, transformer models are adept at capturing long-range
dependencies and contextual relationships within data, making them powerful tools for
interpreting more complex visual information.
Conservation AI’s models are developed using TensorFlow [11] and PyTorch [12],
two of the most widely used open-source machine learning frameworks developed by
Google and Meta, respectively. These frameworks provide robust and flexible platforms for
designing, training, and deploying machine learning models at scale. The training process is
both intensive and iterative, involving the ingestion of vast datasets comprising thousands
of labelled images. These images represent a wide variety of species, human figures, and
objects, capturing typical animal behaviours as well as potential indicators of
poaching activities.
During training, the AI models are exposed to these labelled datasets, allowing them to
progressively learn the distinguishing features and patterns associated with each category.
This process, known as supervised learning, enables the models to continually improve
their accuracy in detecting and classifying entities within new, unseen data. The models
not only learn to identify different species and human presence but also recognise specific
behaviours and anomalies that might suggest threats to wildlife, such as poaching or
habitat encroachment [13].
What sets Conservation AI apart from other approaches is its careful crafting and
continual refinement of these AI models. Techniques such as data augmentation [14],
transfer learning [15], and fine-tuning [16] are employed to maximise model effectiveness.
Data augmentation systematically varies training images—through rotations, translations,
and other transformations—to help the models generalise better to different scenarios.
Transfer learning allows the AI to leverage pre-trained models, adapting them to the
specific requirements of conservation tasks with reduced computational costs and time.
Fine-tuning further refines these models, ensuring they are sensitive to the nuances of local
ecosystems and species.
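Conservation AI’s training code is not published; the following minimal PyTorch sketch only illustrates the transfer-learning and fine-tuning pattern described above. The detector choice (Faster R-CNN, the architecture named later in the benchmarking discussion), the class count, and all names below are assumptions for illustration, not the platform’s actual implementation.
```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Assumed class count: 29 species plus the mandatory background class,
# loosely mirroring the Sub-Saharan Africa Mammals model described later.
NUM_CLASSES = 30

# Transfer learning: start from a detector pre-trained on a large generic
# dataset (COCO) rather than training from scratch.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the classification head so it predicts the conservation classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Fine-tuning: leave all layers trainable so earlier layers can also adapt
# to camera-trap imagery (the paper notes that earlier layers are fine-tuned,
# which is why adding classes later requires a full retraining pass).
optimizer = torch.optim.SGD(model.parameters(), lr=0.005,
                            momentum=0.9, weight_decay=5e-4)

def train_step(images, targets):
    """One supervised step; `images` is a list of CHW tensors and `targets`
    a list of dicts with `boxes` and `labels` from a labelled dataset."""
    model.train()
    loss_dict = model(images, targets)   # torchvision returns component losses
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```
In such a setup, data augmentation would typically be applied in the dataset’s transform pipeline before images reach the training step.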
In addition to these technical advancements, Conservation AI continuously updates
and enhances its models by incorporating new data and feedback from conservation part-
ners. This iterative improvement process, which we term situated learning and precision
modelling, ensures that AI remains at the cutting edge of technology, providing reliable
and actionable insights that support global conservation efforts.
C. Detection and Classification
Once trained, the AI models are deployed to process the data collected by the camera
traps and drones. The detection and classification process involves several key steps:
1. Preprocessing: Raw images and videos undergo preprocessing to enhance quality and
reduce noise. This step includes resizing, rescaling, normalisation, and augmentation
of the data;
2. Inference: The pre-processed data are then fed into the AI models for inference. The
models analyse the images and videos to detect the presence of specific animal species,
humans, and objects;
3. Classification: Detected entities are classified into predefined categories, such as
species type, human presence, and potential indicators of poaching (for example,
cars and people). The classification results are subsequently stored in a database for
further analysis.
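The concrete implementation is not published; as an illustration only, the three steps above could be wired together roughly as follows (function names, the detection threshold, and the storage scheme are all hypothetical):
```python
import torch
import torchvision.transforms.functional as TF
from PIL import Image

def preprocess(path, size=(640, 640)):
    """Step 1: resize, rescale pixel values to [0, 1], and normalise."""
    img = Image.open(path).convert("RGB")
    img = TF.resize(img, size)
    tensor = TF.to_tensor(img)  # rescaling to [0, 1] happens here
    return TF.normalize(tensor, mean=[0.485, 0.456, 0.406],
                        std=[0.229, 0.224, 0.225])

@torch.no_grad()
def infer(model, tensor, threshold=0.5):
    """Step 2: run the detector and keep only confident detections."""
    model.eval()
    output = model([tensor])[0]  # torchvision detectors take a list of images
    keep = output["scores"] >= threshold
    return output["boxes"][keep], output["labels"][keep], output["scores"][keep]

def classify_and_store(records, labels, scores, class_names):
    """Step 3: map label ids to named categories and persist the result
    (a real deployment would write to the platform's database instead)."""
    for label, score in zip(labels.tolist(), scores.tolist()):
        records.append({"category": class_names[label], "confidence": score})
```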
Figures 1–4 demonstrate the robust detection and classification capabilities of Con-
servation AI’s models across various challenging scenarios. Figure 1 illustrates one of
our elephant detections using the Sub-Saharan Africa model. This detection was made
by a real-time camera installed in the Welgevonden Game Reserve in Limpopo Province,
South Africa.
Figure 1. African elephant detected by a real-time camera in Limpopo in South Africa.
Figure 2. Zebra detected by a real-time camera in Welgevonden Game Reserve.
Figure 2 highlights a particularly challenging detection, which is central to the mission
of Conservation AI. Captured by a real-time camera, it shows a zebra at night, detected
from a considerable distance.
Figure 3 demonstrates the models’ proficiency in handling occlusion and heavily
camouflaged animals. In this example, a deer is partially concealed behind tree branches
in a forest, yet the model successfully identifies the animal. Such capabilities are crucial
for comprehensive biodiversity assessments and conservation efforts. This image was
processed from data uploaded by a non-real-time camera.
Figure 3. A deer situated in a wooded forest captured using a traditional non-real-time camera trap.
Figure 4 presents an intricate case where our U.K. Mammals model not only detects
a squirrel but accurately distinguishes it as a grey squirrel. This detection is particularly
noteworthy given the challenging conditions: the animal is small, distant, and partially
obscured by a tree, with poor lighting in the early morning.
Figure 4. A grey squirrel detection in the early morning.
Images like these form the foundation of the datasets used by Conservation AI to train
our models. When models are designed for camera traps, we exclusively use data from
camera traps. The datasets reflect region-specific variables such as seasonality, day and
night cycles, and varying weather conditions like rain and sunshine. Through the process
of situated learning and precision modelling, our models undergo continuous fine-tuning,
often taking up to a year to achieve the desired accuracy using data collected from the
cameras we deploy. This ongoing learning and adaptation process is a unique feature
of Conservation AI. Our commitment to developing long-term relationships with users
and partners ensures that the models achieve the level of accuracy necessary for rigorous
conservation studies.
D. Real-Time and Non-Real-Time Capabilities
Conservation AI offers both real-time and non-real-time detection capabilities, each
serving distinct but complementary roles in conservation efforts. Real-time detection
is critical for enabling immediate responses to poaching activities [17]. The AI models
process data on the fly, triggering alerts to conservationists and authorities when suspicious
activities are identified. Figure 5 illustrates a real-time Conservation AI camera deployment
in the U.K.
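The alerting logic itself is not described in detail; a minimal sketch of the pattern, with an assumed watchlist of classes and a hypothetical notify callable, might look like this:
```python
# Illustrative watchlist and threshold; not the platform's actual configuration.
ALERT_CLASSES = {"human", "vehicle", "weapon"}
ALERT_CONFIDENCE = 0.8

def maybe_alert(detections, notify):
    """Trigger an alert when a high-confidence suspicious class is detected.

    `detections` is an iterable of (class_name, confidence) pairs and `notify`
    is any delivery callable (e-mail, SMS, push notification, ...).
    """
    hits = [(name, conf) for name, conf in detections
            if name in ALERT_CLASSES and conf >= ALERT_CONFIDENCE]
    if hits:
        notify(f"Suspicious activity detected: {hits}")
    return hits
```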
Non-real-time detection, on the other hand, involves batch processing of data collected
over a specific period [18]. This approach typically utilises traditional camera traps that
store data on SD cards, which must be retrieved at designated intervals for offline process-
ing. This method is particularly valuable for long-term monitoring and the comprehensive
analysis of wildlife populations and habitat health. By offering these dual capabilities,
Conservation AI ensures that both urgent, time-sensitive conservation needs and broader,
long-term ecological assessments are effectively addressed.
Figure 5. Real-time camera trap deployed in the U.K.
E. Data Management and Analysis
The classified data are securely stored in an online database, accessible to conserva-
tionists and researchers for further analysis. Conservation AI offers a suite of tools in a
desktop client application, enabling conservationists to upload data, classify it, download
the results, and generate reports for statistical analyses (see Figure 6).
Figure 6. Conservation AI desktop application for data processing and offline analytics.
These tools are instrumental in understanding trends, identifying poaching hotspots,
and making informed decisions to shape conservation strategies. To ensure high per-
formance and reliability, we utilise 3PAR flash storage for our data processing needs.
Additionally, all data are backed up using a Synology NAS system, providing an extra
layer of security and redundancy to safeguard this valuable information [19]. To further
enhance our analytical capabilities, new reporting features are being developed based on
Large Language Models (LLMs), which will improve our ability to generate insightful
and comprehensive reports [7]. This robust data management and analysis infrastructure
distinguishes Conservation AI from other approaches, ensuring that our partners have
access to reliable, secure, and actionable information.
3. Applications and Case Studies
The practical applications of Conservation AI encompass a broad spectrum of conser-
vation activities, including species identification, biodiversity monitoring, and poaching
prevention. As of this writing, Conservation AI supports over 900 active projects globally
and processes more than 1.5 million images per week (and this is growing). By leverag-
ing AI-driven insights, Conservation AI has significantly contributed to the enhancement
and success of these diverse conservation initiatives worldwide. This section presents a
selection of notable case studies that demonstrate the platform’s effectiveness in real-world
scenarios. These examples underscore the wide-ranging ways in which Conservation AI is
utilised to address critical conservation challenges and protect endangered species, further
distinguishing it as a leader in the field of wildlife conservation.
A. Species Identification
One of the primary applications of Conservation AI is the automatic identification of
wildlife species. By analysing images and videos captured by camera traps and drones,
the AI models can accurately identify a wide range of species, including many that are
threatened. This capability is particularly valuable in biodiversity hotspots where multiple
species coexist. For example, in projects conducted across Sub-Saharan Africa, Conserva-
tion AI has successfully identified over 30 different species, providing essential data for
conservationists to monitor and protect these animals [4]. This model has been effectively
deployed in Uganda to monitor pangolins, in Kenya to track bongos, in the Maasai Mara to
observe elephants, and in South Africa to safeguard black rhinos. Readers have already
seen examples of this species identification capability in action, as illustrated in Figures 1–4,
where the AI successfully identified an African elephant, a zebra, a deer, and a grey squir-
rel under challenging conditions. These are mainly images captured from camera traps,
but Conservation AI also provides support for detecting animals using consumer drones
(Figure 7).
Figure 7. African elephant detected from a DJI Mavic 3 drone in Welgevonden Game Reserve in
South Africa.
These examples underscore the platform’s precision and reliability in species identifi-
cation, making it a powerful tool for conservation efforts.
B. Biodiversity Monitoring
Conservation AI plays a pivotal role in monitoring biodiversity and assessing habitat
health. By continuously collecting and analysing data, the platform enables conservationists
to gain a deeper understanding of wildlife population dynamics and their interactions
with the environment. For example, in a case study from Mexico, the South American
Mammals Model is being utilised to monitor and track jaguars, while in California, the
North American Mammals model is used to observe the movements and behaviours of
mountain lions. The data collected through these initiatives have provided critical insights
into migration patterns, feeding habits, and the impact of human–wildlife conflict. These
findings not only enhance our understanding of these species but also inform strategies for
mitigating threats and promoting coexistence between human and wildlife populations.
C. Poaching Prevention
One of the most impactful applications of Conservation AI is to support anti-poaching
activities. The platform’s real-time detection capabilities facilitate rapid responses to illegal
hunting activities. In notable case studies from Uganda and the U.K., Conservation AI
successfully detected poaching activities involving pangolins and badgers, leading to
convictions and prison sentences. The AI models identified suspicious activities (the use
of spears and knives as well as the theft of real-time cameras that continue to transmit
images unbeknown to the perpetrator) within restricted areas, promptly sending alerts to
park rangers, who were able to intervene and notify law enforcement. These interventions
demonstrate the effectiveness of Conservation AI in supporting anti-poaching activities,
underscoring its critical role in protecting endangered species.
D. Community Engagement
Engaging local communities is vital for the success of conservation projects. Conserva-
tion AI supports this engagement by providing accessible data and visualisations that can
be shared with community members. In a project in India, the platform was employed to
involve local communities in monitoring tiger populations. The data collected by Conser-
vation AI were shared with villagers, who were trained to use the platform and actively
contribute to conservation efforts. This collaborative approach not only enhanced data
collection but also fostered a sense of ownership and responsibility among the community,
empowering them to take an active role in protecting their local wildlife.
4. Results and Discussion
The results from various projects using Conservation AI underscore the platform’s
effectiveness in advancing wildlife conservation efforts. Key performance metrics, such as
the number of files processed, observations recorded, and species detected, highlight its
impact. As of this writing, the platform has processed over 30 million images and identified
more than 9 million animals across 88 species (please visit www.conservationai.co.uk to
see updated stats). The platform’s accuracy in species identification remains consistently
high, achieving an average precision rate of 95%. These metrics illustrate the robustness
and reliability of Conservation AI in supporting conservation initiatives worldwide.
A. Model Performance
The performance of the training results is evaluated using the mean average precision
(mAP) with an intersection over union (IoU) threshold of 0.5 [20]. mAP is a widely used
metric for assessing the effectiveness of object detection models, ranging from 0 to 1, with
higher values indicating superior performance. Specifically, mAP@0.5 refers to the model’s
predictions being evaluated at an IoU threshold of 0.5, a standard measure in the field.
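For readers less familiar with the metric, the IoU computation underlying mAP@0.5 is standard and can be stated compactly (illustrative code, not the platform’s evaluation harness):
```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```
A prediction counts as a true positive at mAP@0.5 when it matches a ground-truth box of the same class with an IoU of at least 0.5; average precision is the area under the resulting per-class precision–recall curve, and mAP is the mean over all classes.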
The Sub-Saharan Africa Mammals model is one of our most utilised models. Figure 8
displays the precision–recall curve for the individual classes (29 classes) within this model.
The dark blue, thicker line represents the combined class, with an mAP@0.5 of 0.974. For
clarity, we have omitted the class legend due to the large number of classes, which would
make the colours and text difficult to discern.
An mAP@0.5 of 0.974 is considered excellent in the field of object detection. Achieving
this level of precision indicates that the model is highly accurate in detecting and classifying
objects, making it exceptionally reliable for use in conservation studies.
Figure 9 presents the same metric for the U.K. Mammals model, which includes
26 classes. The thick blue line indicates that an mAP@0.5 of 0.976 was achieved. By em-
ploying the same rigorous data preprocessing, data variance, and model training pipeline
as used in the Sub-Saharan Africa Mammals model, we obtained similarly high results.
Figure 8. Precision–recall curve for the Sub-Saharan Africa Mammals model with an average of 0.974
mAP@0.5.
Figure 9. Precision–recall curve for the U.K. Mammals model with an average of 0.976 mAP@0.5.
Consistent with the results from our Sub-Saharan Africa Mammals model, an mAP@0.5
of 0.976 is regarded as an excellent outcome. The North American Mammals model, which
includes 12 classes, similarly demonstrates strong performance with an mAP@0.5 of 0.961,
as shown in Figure 10. The Indomalayan Mammals model (10 classes) produces comparable
results with an mAP@0.5 of 0.977 (Figure 11), while the Central Asian Mammals model
(6 classes) achieves an even higher mAP@0.5 of 0.989 (Figure 12).
B. Results from Use Cases
Several papers have been published demonstrating the capabilities of Conservation
AI across various projects, with many more expected to follow. Appendix A provides a
collection of images generated by our models in some of those studies, offering readers
a glimpse into the wide-ranging applications of Conservation AI. These images illustrate
the challenging environments in which our models operate and highlight some of the
complex detections that can be difficult even for humans to classify. For more detailed
discussions on specific models and studies, readers are directed to the following collection
of papers [4,5,10,21–28].
Figure 10. Precision–recall curve for the North American Mammals model with an average of 0.961
mAP@0.5.
Figure 11. Precision–recall curve for the Indomalayan Mammals model with an average of 0.977
mAP@0.5.
Figure 12. Precision–recall curve for the Central Asian Mammals model with an average 0.989
mAP@0.5.
C. Challenges and Limitations
Despite its successes, Conservation AI faces several challenges and limitations. One
of the primary challenges lies in the quality of data collected. Camera traps, more so than
drones (although they have their own problems, i.e., distance and scale), often capture low-
quality images and videos under a wide range of adverse weather and environmental con-
ditions. Acquiring sufficient region-specific data to effectively train our models—especially
when dealing with rare or critically endangered animals—can be particularly difficult. This
limitation can affect the accuracy of the AI models and, in some cases, slow down progress.
Even when data are available, the tagging process is time-consuming and crucial to the
overall pipeline (tagging is performed during the data preprocessing stage). If tagging is not
performed meticulously, it can significantly degrade the quality of the models. Maintaining
high standards in data management is both time-consuming and costly, and as a non-profit
platform, ensuring the necessary data management capacity remains an ongoing challenge.
Additionally, deploying and maintaining camera traps and drones in remote areas
presents logistical challenges and requires considerable resources. While transfer learning
helps mitigate some data collection issues and contributes to our strong results, it does not
eliminate the need to create high-quality, balanced datasets for model training. Features
Conservation 2024,4696
present in the deployed environment may not be included in the initial base models,
necessitating the need to deploy hardware in model-specific regions to create custom
datasets. This can be likened to overfitting models to regions of intended use.
Another limitation is the need for continuous training and updating of AI models.
As new species are added and environmental conditions evolve, base models must be
retrained from scratch to maintain their accuracy. It is not sufficient to simply add new data
or classes to an existing model, as transfer learning requires fine-tuning earlier layers in the
network. Consequently, the entire dataset, including the new information, must be passed
through the training cycle again, which demands significant computational resources—an
expensive requirement over time.
Moreover, the deployment of AI technology in conservation efforts necessitates collab-
oration with local communities and authorities. This collaboration can be challenging due
to cultural and logistical barriers, as well as general mistrust in AI solutions.
D. Discussion
The results from the models presented, along with the images collected from various
projects (as shown in Appendix A), demonstrate the significant potential of Conservation
AI to revolutionise wildlife conservation. The platform’s capacity to process vast amounts
of data rapidly—benchmarking at 100 million images processed in seven days using 8 RTX
A6000 GPUs loaded with 130 Faster RCNN deep learning vision models [29]—and with
high accuracy provides conservationists with critical support to conserve wildlife popula-
tions and their habitats. The real-time detection capabilities are particularly advantageous
for preventing poaching and protecting endangered species, highlighting Conservation
AI’s role in advancing the effectiveness of conservation strategies.
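To put the benchmark in perspective, 100 million images in seven days corresponds to roughly 100,000,000 / (7 × 86,400 s) ≈ 165 images per second in aggregate, or about 21 images per second per GPU across the eight cards.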
However, the challenges and limitations emphasise the need for ongoing improvement
and collaboration. Enhancing the quality of data collection, increasing the robustness of AI
models, and fostering stronger partnerships with local communities and authorities are
crucial for maximising the impact of Conservation AI.
5. Future Directions
The future of Conservation AI is characterised by continuous innovation and ex-
pansion. This section outlines the technological advancements, expansion plans, and
collaborative initiatives that will drive the platform’s development. By focusing on refining
AI models, extending geographical coverage, and strengthening partnerships with local
communities and policymakers, Conservation AI aims to amplify its impact on wildlife
conservation. Furthermore, ongoing research and development efforts will explore new
applications of AI while addressing ethical considerations, ensuring that the technology is
employed responsibly and effectively for the greater good of conservation.
A. Technological Advances
A key area of focus is extending and enhancing the accuracy and robustness of AI
models. This will be accomplished by incorporating new data and emerging deep learning
models, particularly those with multimodal capabilities, such as vision transformers [30].
Vision transformers will enable more sophisticated and nuanced analysis by integrating
diverse data types, improving the platform’s ability to detect and classify wildlife and
poaching activities. Although audio modelling is already part of the Conservation AI
model ecosystem, fully integrating it with the visual components of the platform remains
an ongoing goal. This integration will further strengthen the platform’s capabilities in
providing comprehensive conservation solutions [24].
Another promising direction for Conservation AI is the development of edge AI
solutions. By deploying AI models directly on camera traps and drones, Conservation
AI can process data locally, reducing the reliance on constant internet connectivity and
enabling real-time decision-making even in remote areas. This approach also conserves
bandwidth and energy, making the system more sustainable and efficient.
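Edge deployment is stated here as a direction rather than a shipped feature; one common pattern, sketched below with TensorFlow Lite and a hypothetical quantised model file, is to run the detector on-device so that only detections, rather than raw imagery, need to leave the camera:
```python
import numpy as np
import tensorflow as tf

# Hypothetical exported edge model; Conservation AI's models are not public.
interpreter = tf.lite.Interpreter(model_path="detector_int8.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def detect_on_device(frame: np.ndarray):
    """Run one camera frame locally and return the raw detector output."""
    batch = frame[np.newaxis].astype(inp["dtype"])  # add batch dimension
    interpreter.set_tensor(inp["index"], batch)
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])
```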
B. Expansion Plans
Conservation AI is committed to expanding its platform to support a broader range
of devices and regions. This expansion includes integrating with various types of camera
traps, drones, and other data collection tools. Current efforts are focused on abstracting the
sensor hardware to create a single interoperable solution. A key feature common to camera
traps and other types of edge devices is the presence of an SD card slot for data storage.
We intend to leverage this feature by using wireless Wi-Fi SD cards capable of transmitting
data to a base station, which can then relay it to our servers via wide area communications
(4/5G where available or STARLINK Mini where connectivity is limited) [17]. By extending
geographical coverage, Conservation AI aims to support conservation efforts across diverse
ecosystems, from tropical rainforests to arid deserts.
The platform also plans to collaborate more extensively with conservation organisa-
tions, research institutions, and governments. As our user base continues to expand, we
are committed to promoting widespread adoption in tandem with the platform’s growth.
These partnerships will be crucial in scaling up the deployment of Conservation AI and en-
suring that the technology is customised to meet the specific needs of diverse conservation
projects. Furthermore, expanding the user base will contribute to the collection of more
data, which in turn will enhance the accuracy and effectiveness of the AI models through
ongoing training activities.
C. Community Engagement
Collaboration is essential to the success of Conservation AI. The platform is committed
to fostering partnerships with local communities, conservationists, and policymakers.
Engaging local communities in data collection and monitoring not only enhances the quality
of data but also ensures that conservation efforts are culturally sensitive and sustainable.
Training programmes and workshops can empower community members to utilise the
technology effectively and contribute actively to conservation efforts. The Conservation
AI team frequently travels to conservation sites to gain a deeper understanding of the
specific challenges of current conservation methods and determine how our platform can
support study pipelines. This hands-on approach has enabled us to develop long-term
relationships, which we intend to strengthen through increased outreach efforts.
Furthermore, collaboration with policymakers is crucial for creating supportive regu-
latory frameworks that facilitate the deployment of AI in conservation. This is especially
important as bio credits [31] emerge as a key tool in biodiversity monitoring and manage-
ment, alongside the adoption of more radical concepts such as interspecies money, where
animals own their own resources [4]. By working together, stakeholders can address legal
and ethical considerations, including data privacy and the impact of AI on local wildlife
and communities [32].
D. Research and Development
Ongoing research and development are vital for the continuous enhancement of
Conservation AI. Future research will explore new applications of AI in conservation,
such as predicting the impact of climate change on wildlife populations and habitats.
This involves extending beyond classification to focus on predictive modelling, utilising
techniques similar to recurrent neural networks (RNNs) [33], long short-term memory
(LSTM) networks [34], and the more recent advancements in transformers [7].
Investing in research on the ethical implications of AI in conservation is equally im-
portant. This includes examining potential biases in AI models and ensuring that the
technology is employed responsibly and transparently, avoiding any negative impacts
on biodiversity. Common biases include data and sampling biases, where training data
may not comprehensively represent diverse species or regions, potentially skewing model
performance. Algorithmic and labelling biases can emerge from the design of models or
human subjectivity in data annotation, leading to uneven detection accuracy. Cultural and
temporal biases may shape conservation priorities or lead to outputs misaligned with cur-
rent environmental conditions. Presentation bias can influence stakeholders’ interpretation
of results, and technological bias can arise from limitations in hardware used for data col-
lection. Addressing these biases involves ensuring diverse, representative data, continuous
model evaluation, regular updates, and transparent reporting to uphold responsible and
equitable AI practices in conservation efforts.
6. Conclusions
In this paper, we have explored the innovative application of artificial intelligence
in wildlife conservation through the lens of Conservation AI. By leveraging advanced AI
models and sophisticated data collection techniques, Conservation AI has demonstrated
significant potential in enhancing conservation efforts. The platform’s ability to accurately
detect and classify wildlife, monitor biodiversity, and prevent poaching activities offers
valuable tools for conservationists and researchers.
The case studies presented highlight the diverse applications of Conservation AI,
ranging from species identification to biodiversity monitoring and poaching prevention.
These examples underscore the platform’s effectiveness in addressing some of the most
pressing challenges in wildlife conservation. However, the success of Conservation AI also
reveals several challenges and limitations, including issues related to data quality, model
accuracy, and logistical constraints.
Looking ahead, the future of Conservation AI lies in ongoing technological advance-
ments, expansion plans, and collaborative initiatives. By further improving AI models,
expanding geographical coverage, and fostering partnerships with local communities and
policymakers, Conservation AI can amplify its impact on wildlife conservation. Contin-
ued research and development will be essential for exploring new applications of AI and
addressing ethical considerations.
In conclusion, Conservation AI represents a significant step towards harnessing the
power of artificial intelligence for the greater good of wildlife conservation. The integration
of AI into conservation efforts not only enhances the efficiency and accuracy of monitoring
and protection strategies but also opens new possibilities for understanding and preserving
our planet’s biodiversity. As we progress, it is crucial to continue investing in AI-driven
conservation technologies and fostering collaboration among stakeholders to ensure a
sustainable and thriving future for wildlife.
Author Contributions: Conceptualization, P.F., C.C., S.L. and S.W.; methodology, P.F. and C.C.;
software, P.F. and C.C.; validation, P.F., C.C., S.L. and S.W.; formal analysis, P.F. and C.C.; investigation,
P.F., C.C., S.L. and S.W.; writing—original draft preparation, P.F.; writing—review and editing, P.F.,
C.C., S.L. and S.W. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Data Availability Statement: The data used in Conservation AI are not publicly available due to ongoing sensitivities.
Acknowledgments: We owe a significant debt of gratitude to Knowsley Safari, who, in the early days of Conservation AI, provided invaluable support with data collection and model training by
allowing us to install real-time cameras across their animal paddocks. We also thank them for their
ongoing support, as our cameras continue to be deployed and collect data to this day. We would
also like to extend our gratitude to Welgevonden Game Reserve in Limpopo Province in South Africa, who installed over 25 of our real-time cameras and evaluated the Sub-Saharan Africa model
we developed with Knowsley Safari. We are also deeply indebted to Chester Zoo, a long-term
collaborator and active partner in the development of Conservation AI. Their contributions, including
extensive data provision and the facilitation of real-time camera installations in several African
countries for ongoing studies—such as pangolin monitoring in Uganda and bongo monitoring in
Kenya—have been instrumental in advancing our work. There are many more contributors who have
helped develop Conservation AI—too many to mention in this paper—and we thank you all. Lastly,
we would like to thank UKRI, particularly STFC and EPSRC, in the U.K., for providing valuable
funds to develop Conservation AI. We would also like to thank the U.S. Fish and Wildlife Service for
the funding and support they have provided us.
Conflicts of Interest: The authors declare no conflict of interest.
Appendix A
Figure A1. Collection of detections from several of our studies conducted in different geographical locations globally.
References
1. Haines-Young, R.; Potschin, M. The links between biodiversity, ecosystem services and human well-being. Ecosyst. Ecol. New Synth. 2010, 1, 110–139.
2. Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach; Pearson: London, UK, 2016.
3. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
4. Fergus, P.; Chalmers, C.; Longmore, S.; Wich, S.; Warmenhove, C.; Swart, J.; Ngongwane, T.; Burger, A.; Ledgard, J.; Meijaard, E. Empowering wildlife guardians: An equitable digital stewardship and reward system for biodiversity conservation using deep learning and 3/4G camera traps. Remote Sens. 2023, 15, 2730. [CrossRef]
5. Chalmers, C.; Fergus, P.; Wich, S.; Longmore, S.N.; Walsh, N.D.; Stephens, P.A.; Sutherland, C.; Matthews, N.; Mudde, J.; Nuseibeh, A. Removing human bottlenecks in bird classification using camera trap images and deep learning. Remote Sens. 2023, 15, 2638. [CrossRef]
6. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [CrossRef]
7. Vaswani, A. Attention is all you need. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017.
8. Kwok, R. AI empowers conservation biology. Nature 2019, 567, 133–134. [CrossRef]
9. Kamminga, J.; Ayele, E.; Meratnia, N.; Havinga, P. Poaching detection technologies—A survey. Sensors 2018, 18, 1474. [CrossRef]
10. Lee, A. Conservation AI Makes Huge Leap Detecting Threats to Endangered Species Across the Globe. Available online: https://blogs.nvidia.com/blog/conservation-ai-detects-threats-to-endangered-species/ (accessed on 16 February 2023).
11. Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M.; et al. TensorFlow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), Savannah, GA, USA, 2–4 November 2016; pp. 265–283.
12. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An imperative style, high-performance deep learning library. Adv. Neural Inf. Process Syst. 2019, 32, 8026–8037.
13. Lindsey, P.A.; Balme, G.; Becker, M.; Begg, C.; Bento, C.; Bocchino, C.; Dickman, A.; Diggle, R.W.; Eves, H.; Henschel, P.; et al. The bushmeat trade in African savannas: Impacts, drivers, and possible solutions. Biol. Conserv. 2013, 160, 80–96. [CrossRef]
14. Shorten, C.; Khoshgoftaar, T.M. A survey on image data augmentation for deep learning. J. Big Data 2019, 6, 1–48. [CrossRef]
15. Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2009, 22, 1345–1359. [CrossRef]
16. Guo, Y.; Shi, H.; Kumar, A.; Grauman, K.; Rosing, T.; Feris, R. SpotTune: Transfer learning through adaptive fine-tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 4805–4814.
17. Whytock, R.C.; Suijten, T.; van Deursen, T.; Świeżewski, J.; Mermiaghe, H.; Madamba, N.; Mouckoumou, N.; Zwerts, J.A.; Pambo, A.F.K.; Bahaa-el-din, L.; et al. Real-time alerts from AI-enabled camera traps using the Iridium satellite network: A case-study in Gabon, Central Africa. Methods Ecol. Evol. 2023, 14, 867–874. [CrossRef]
18. Trolliet, F.; Vermeulen, C.; Huynen, M.C.; Hambuckers, A. Use of camera traps for wildlife studies: A review. Biotechnol. Agron. Société Environ. 2014, 18, 446–454.
19. Choi, J.H.; Park, J.; Lee, S. Reassembling Linux-based hybrid RAID. J. Forensic Sci. 2020, 65, 966–973. [CrossRef] [PubMed]
20. Padilla, R.; Netto, S.L.; Da Silva, E.A. A survey on performance metrics for object-detection algorithms. In Proceedings of the 2020 International Conference on Systems, Signals and Image Processing (IWSSIP), Niteroi, Brazil, 1–3 July 2020; pp. 237–242.
21. Westworth, S.O.; Chalmers, C.; Fergus, P.; Longmore, S.N.; Piel, A.K.; Wich, S.A. Understanding external influences on target detection and classification using camera trap images and machine learning. Sensors 2022, 22, 5386. [CrossRef]
22. Doull, K.E.; Chalmers, C.; Fergus, P.; Longmore, S.; Piel, A.K.; Wich, S.A. An evaluation of the factors affecting ‘poacher’ detection with drones and the efficacy of machine-learning for detection. Sensors 2021, 21, 4074. [CrossRef] [PubMed]
23. Chalmers, C.; Fergus, P.; Curbelo Montanez, C.A.; Longmore, S.N.; Wich, S.A. Video analysis for the detection of animals using convolutional neural networks and consumer-grade drones. J. Unmanned Veh. Syst. 2021, 9, 112–127. [CrossRef]
24. Chalmers, C.; Fergus, P.; Wich, S.; Longmore, S.N. Modelling animal biodiversity using acoustic monitoring and deep learning. In Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 18–22 July 2021; pp. 1–7.
25. Vélez, J.; McShea, W.; Shamon, H.; Castiblanco-Camacho, P.J.; Tabak, M.A.; Chalmers, C.; Fergus, P.; Fieberg, J. An evaluation of platforms for processing camera-trap data using artificial intelligence. Methods Ecol. Evol. 2023, 14, 459–477. [CrossRef]
26. Vélez, J.; Castiblanco-Camacho, P.J.; Tabak, M.A.; Chalmers, C.; Fergus, P.; Fieberg, J. Choosing an Appropriate Platform and Workflow for Processing Camera Trap Data using Artificial Intelligence. arXiv 2022, arXiv:2202.02283.
27. Kissling, W.D.; Evans, J.C.; Zilber, R.; Breeze, T.D.; Shinneman, S.; Schneider, L.C.; Chalmers, C.; Fergus, P.; Wich, S.; Geelen, L.H. Development of a cost-efficient automated wildlife camera network in a European Natura 2000 site. Basic Appl. Ecol. 2024, 79, 141–152. [CrossRef]
28. Piel, A.K.; Crunchant, A.; Knot, I.E.; Chalmers, C.; Fergus, P.; Mulero-Pázmány, M.; Wich, S.A. Noninvasive technologies for primate conservation in the 21st century. Int. J. Primatol. 2022, 43, 1–35. [CrossRef]
29. Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448.
30. Khan, S.; Naseer, M.; Hayat, M.; Zamir, S.W.; Khan, F.S.; Shah, M. Transformers in vision: A survey. ACM Comput. Surv. (CSUR) 2022, 54, 1–41. [CrossRef]
31. Steele, P.; Porras, I. Making the Market Work for Nature: How Biocredits Can Protect Biodiversity and Reduce Poverty; IIED: London, UK, 2020.
32. Nandutu, I.; Atemkeng, M.; Okouma, P. Integrating AI ethics in wildlife conservation AI systems in South Africa: A review, challenges, and future research agenda. AI Soc. 2023, 38, 1–13. [CrossRef]
33. Sherstinsky, A. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Phys. D Nonlinear Phenom. 2020, 404, 132306. [CrossRef]
34. Zhao, Z.; Chen, W.; Wu, X.; Chen, P.C.; Liu, J. LSTM network: A deep learning approach for short-term traffic forecast. IET Intell. Transp. Syst. 2017, 11, 68–75. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
Article
Astounding results from Transformer models on natural language tasks have intrigued the vision community to study their application to computer vision problems. Among their salient benefits, Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequence as compared to recurrent networks e.g. , Long short-term memory (LSTM). Different from convolutional networks, Transformers require minimal inductive biases for their design and are naturally suited as set-functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities ( e.g. , images, videos, text and speech) using similar processing blocks and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline. We start with an introduction to fundamental concepts behind the success of Transformers i.e., self-attention, large-scale pre-training, and bidirectional feature encoding. We then cover extensive applications of transformers in vision including popular recognition tasks ( e.g. , image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks ( e.g. , visual-question answering, visual reasoning, and visual grounding), video processing ( e.g. , activity recognition, video forecasting), low-level vision ( e.g. , image super-resolution, image enhancement, and colorization) and 3D analysis ( e.g. , point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques both in terms of architectural design and their experimental value. Finally, we provide an analysis on open research directions and possible future works. We hope this effort will ignite further interest in the community to solve current challenges towards the application of transformer models in computer vision.