
Current Challenges and Future Research Areas for Digital Forensic Investigation



David Lillis, Brett A. Becker, Tadhg O’Sullivan and Mark Scanlon
School of Computer Science,
University College Dublin, Ireland.
{david.lillis, brett.becker, t.osullivan, mark.scanlon},
Given the ever-increasing prevalence of technology in modern life, there is a corresponding increase
in the likelihood of digital devices being pertinent to a criminal investigation or civil litigation. As
a direct consequence, the number of investigations requiring digital forensic expertise is resulting in
huge digital evidence backlogs being encountered by law enforcement agencies throughout the world.
It can be anticipated that the number of cases requiring digital forensic analysis will greatly increase
in the future. It is also likely that each case will require the analysis of an increasing number of
devices including computers, smartphones, tablets, cloud-based services, Internet of Things devices,
wearables, etc. The variety of new digital evidence sources poses new and challenging problems for
the digital investigator from an identification, acquisition, storage and analysis perspective. This
paper explores the current challenges contributing to the backlog in digital forensics from a technical
standpoint and outlines a number of future research topics that could greatly contribute to a more
efficient digital forensic process.
Keywords: Digital Evidence Backlog, Digital Forensic Challenges, Future Research Topics
1 Introduction

The early 21st century has seen a dramatic
increase in new and ever-evolving technologies
available to consumers and industry alike. Gen-
erally, the consumer-level user base is now more
adept and knowledgeable about what technolo-
gies they employ in their day-to-day lives. The
number of cases where digital evidence is rele-
vant to an investigation is ever increasing and
it is envisioned that the existing backlog for law
enforcement will balloon in the coming years as
the prevalence of digital devices increases. It
is for these reasons that it is important to take
stock of the current state of affairs in the field of
digital forensics. Cloud-based services, Internet-
of-Things devices, anti-forensic techniques, dis-
tributed and high capacity storage, and the sheer
volume and heterogeneity of pertinent devices
pose new and challenging problems for the acquisition, storage and analysis of this digital evidence.
Due to the sheer volume of data to be acquired,
stored, analysed and reported on, combined with
the level of expertise necessary to ensure the
court admissibility of the resultant evidence, it
was inevitable that a significant backlog in cases
awaiting analysis would occur [Hitchcock et al.,
2016]. Three particular aspects have contributed
to this backlog [Quick and Choo, 2014]:
1. An increase in the number of devices that
are seized for analysis per case.
2. The number of cases whereby digital evi-
dence is deemed pertinent is ever increasing.
3. The volume of potentially evidence-rich data
stored on each item seized is also increasing.
This backlog is having a significant impact
on the ideal legal process. According to a re-
port by the Garda Síochána Inspectorate [2015]
(Irish National Police), delays of up to four years
in conducting digital forensic investigations on
seized devices have “seriously impacted on the
timeliness of criminal investigations” in recent
years. In some cases, these delays have resulted
in prosecutions being dismissed in courts. This
issue regarding the digital evidence backlog is fur-
ther compounded by the cross-border, inter-
agency cooperation required by many forensic
investigations. If a given country has an espe-
cially low digital investigative capacity, it can
have a significant knock-on effect in an interna-
tional context [James and Jang, 2014].
In this paper, we review relevant recent re-
search literature to elucidate the developments
and current challenges in the field. While much
progress has been made in the digital forensic
process in recent years, little work has made ap-
preciable progress in tackling the evidence back-
log in practice. While evidence is lying un-
analysed in an evidence store, investigations are
often left waiting for new leads to be discov-
ered, which has serious consequences for follow-
ing these new threads of investigation at a later
date. A number of practical infrastructural im-
provements to the current forensic process are
discussed including automation of device acquisi-
tion and analysis, Forensics-as-a-Service (FaaS),
hardware-facilitated heterogeneous evidence pro-
cessing, remote evidence acquisition, and cross-
jurisdictional evidence sharing over the Internet.
These infrastructural improvements will enable a
number of both new and improved forensic pro-
cesses. These may include data visualisation,
multi-device evidence and timeline resolution,
data deduplication for storage and acquisition
purposes, parallel or distributed investigations
and process optimisation of existing techniques.
The aforementioned improvements should com-
bine to aid law enforcement and private digi-
tal investigators to greatly expedite the current
forensic process. It is envisioned that the future
research areas presented as part of this paper will
influence further research in the field.
2 Current Challenges

Raghavan [2013] outlined five major challenge ar-
eas for digital forensics, gathered from a survey
of research in the area:
1. The complexity problem, arising from data
being acquired at the lowest (i.e. binary)
format with increasing volume and hetero-
geneity, which calls for sophisticated data
reduction techniques prior to analysis.
2. The diversity problem, resulting naturally
from ever-increasing volumes of data, but
also from a lack of standard techniques to
examine and analyse the increasing numbers
and types of sources, which bring a plural-
ity of operating systems, file formats, etc.
The lack of standardisation of digital evi-
dence storage and the formatting of asso-
ciated metadata also unnecessarily adds to
the complexity of sharing digital evidence
between national and international law enforcement agencies [Scanlon and Kechadi].
3. The consistency and correlation problem re-
sulting from the fact that existing tools are
designed to find fragments of evidence, but
not to otherwise assist in investigations.
4. The volume problem, resulting from in-
creased storage capacities and the number
of devices that store information, and a lack
of sufficient automation for analysis.
5. The unified time lining problem, where mul-
tiple sources present different time zone
references, timestamp interpretations, clock
skew/drift issues, and the syntax aspects in-
volved in generating a unified timeline.
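The unified time lining problem above can be made concrete with a short sketch. The following is a minimal, illustrative Python example (the device names, UTC offsets and clock-skew values are invented for illustration, not drawn from any real case) that normalises timestamps from heterogeneous sources into a single UTC timeline, correcting for time zone and measured clock skew:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical per-source clock-skew corrections in seconds, e.g. measured
# against a trusted reference clock at acquisition time.
SKEW = {"laptop": 0, "phone": -42, "cctv": 3600}

def normalise(source, local_iso, utc_offset_hours):
    """Convert a source-local timestamp to a skew-corrected UTC datetime."""
    local = datetime.fromisoformat(local_iso).replace(
        tzinfo=timezone(timedelta(hours=utc_offset_hours)))
    return local.astimezone(timezone.utc) - timedelta(seconds=SKEW[source])

events = [
    ("phone", "2016-03-01T14:05:00", 0),   # UTC device whose clock runs 42 s slow
    ("laptop", "2016-03-01T15:04:30", 1),  # local time recorded in UTC+1
    ("cctv", "2016-03-01T15:03:50", 0),    # recorder clock one hour fast
]
timeline = sorted((normalise(*e), e[0]) for e in events)
for ts, src in timeline:
    print(ts.isoformat(), src)
```

Real unified time lining must additionally cope with clock drift over time and with ambiguous or missing time zone metadata, which this sketch deliberately ignores.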
Numerous other researchers have identified
more specific challenges, which can generally be
categorised according to Raghavan’s above clas-
sification. Examples include Garfinkel [2010],
Wazid et al. [2013], and Karie and Venter [2015].
It is widely agreed that the volume of data that
is potentially relevant to investigations is grow-
ing rapidly. The amount of data per case at the
FBI’s 15 regional computer forensic laboratories
has grown 6.65 times between 2003 and 2011, from
84GB to 559GB [Roussev et al., 2013]. One cause
of this is the growth in storage capacities that
has occurred in recent years. Additionally, the
increasing proliferation of mobile and Internet of Things (IoT) de-
vices adds to the number of devices that require
examination in a given investigation. Beyond the
magnitude of the data, the use of cloud services
means that it may not be clear what data exists
and where it is actually located.
As advanced mobile and wearable technolo-
gies have continued to become more ubiquitous
amongst the general population, they also now
play a more prevalent role in digital forensic in-
vestigations. Over the past decade the capa-
bilities of these smart devices have reached a
point where they can function at a level near to
that of the average household computer and are
currently only limited by processing power and
storage capacity. This contributes to the diver-
sity problem, where a greater variety of devices
become candidates for digital forensic investiga-
tion (e.g. Baggili et al. [2015] have reported on
forensics on smart watches). Mobile and IoT de-
vices make use of a variety of operating systems,
file formats and communication standards, all of
which add to the complexity of digital investiga-
tions. In addition, embedded storage may not
be easily removable from devices, unlike for tra-
ditional desktop and server computers, and in
some cases a device will lack persistent storage
entirely, necessitating expensive RAM forensics.
Investigating multiple devices also contributes
to the consistency and correlation problem,
where evidence gathered from distinct sources
must be correlated for temporal and logical con-
sistency. This is often performed manually: a sig-
nificant drain on investigators’ resources. The requirement for RAM forensics also becomes pertinent in cases of anti-forensics, where a digital
criminal takes measures to avoid evidence being
acquired, including the creation of malware that
resides in RAM alone. The increasing sophis-
tication of digital criminals’ activities is also a
substantial challenge.
Other issues include limitations on bandwidth
for transferring data for investigation, the volatil-
ity of evidence, the fact that digital media has a
limited lifespan that may possibly result in evi-
dence being lost, and the increasing ubiquity of
encryption in modern communications and data storage.
The following sections concentrate on a num-
ber of important emerging trends in modern computing that contribute to the problems outlined above.
2.1 Internet-of-Things
The Internet-of-Things (IoT) refers to a vision of
everyday items that are connected to a network
and send data to one another. Juniper Research
[2015] estimate that there are already 13.4bn IoT
devices in existence in 2015, and they expect this
figure to reach 38.5bn by 2020. These IoT de-
vices are typically deployed in two broad areas:
in the consumer domain (smart home, connected
vehicles, digital healthcare) and in the industrial
domain (retail, connected buildings, agriculture).
Some IoT devices are commonplace items that
have Internet connectivity added (e.g. refrigera-
tors, TVs), whereas others are newer sensing or
actuation devices that have been developed with
the IoT specifically in mind.
The IoT has the potential to become a rich
source of evidence from the physical world, and
as such it poses its own unique set of challenges
for digital forensic investigators [Hegarty et al.,
2014]. Compared to traditional digital forensics,
there is less certainty in where data originated
from, and where it is stored. Data persistence
may be a problem. IoT devices themselves typ-
ically have limited memory (and may have no
persistent data storage). Thus any data that is
stored for longer periods may be stored in some
in-network hub, or sent to the cloud for more
persistent storage. This therefore means that the
challenges related to cloud forensics (as discussed
below in Section 2.2) will likely apply in the IoT
domain also.
Already, some efforts have begun to analyse
IoT devices for forensics purposes (e.g. Sutherland et al. [2014] on smart TVs); however, this
work is in its early stages at present. The het-
erogeneous nature of IoT devices, including dif-
ferences in operating systems, filesystems and
communication standards, adds significantly to
the complexity, diversity and correlation prob-
lems for forensic investigators.
Ukil et al. [2011] outline some security con-
cerns of IoT researchers, which feed directly
into the desires of forensic investigators, incor-
porating issues such as availability, authentic-
ity and non-repudiation, which are important for
legally-sound use of the data. These are ad-
dressed using encryption technologies, which are
easy to incorporate into computationally pow-
erful devices that are connected to mains en-
ergy. However it becomes more of a challenge
for smaller, battery-operated, computationally-
constrained devices, where such considerations
may be sacrificed. This has inevitable consequences for the usefulness of the data in a legal context.
2.2 Emerging Cloud Computing or
Cloud Forensic Challenges
Usage of cloud services such as Amazon Cloud
Drive, Office 365, Google Drive and Dropbox is
now commonplace amongst the majority of In-
ternet users. From a digital forensics point of
view, these services present a number of unique
challenges, as has been reported in the 2014 Na-
tional Institute of Standards and Technology’s
draft report [NIST, 2014]. Typically, data in
the cloud is distributed over a number of dis-
tinct nodes unlike more traditional forensic sce-
narios where data is stored on a single machine.
Due to the distributed nature of cloud services,
data can potentially reside in multiple legal juris-
dictions, leading to investigators relying on local
laws and regulations regarding the collection of
evidence [Simou et al., 2014, Ruan et al., 2013].
This can potentially increase the time, cost and
difficulty associated with a forensic investigation.
From a technical standpoint, the fact that a sin-
gle file can be split into a number of data blocks
that are then stored on different remote nodes
adds another layer of complexity thereby making
traditional digital forensic tools redundant [Chen
et al., 2015, Almulla et al., 2013].
Additionally, the Cloud Service Providers
(CSP) and their user base must be taken into
consideration. Investigators are reliant on the
willingness of CSPs to allow for the acquisition
and reproduction of data. The lack of stan-
dardisation among the varying CSPs, differing
levels of data security and their Service Level
Agreements are obstacles to both cloud foren-
sic researchers and investigators [Almulla et al.,
2013]. The multi-tenancy of many cloud sys-
tems poses three significant challenges to digi-
tal forensic investigations. In the majority of
cases the privacy and confidentiality of legitimate
users must be taken into account by investiga-
tors due to the shared infrastructures that sup-
port cloud systems [Morioka and Sharbaf, 2015].
The distributed nature of cloud systems along
with multi-tenancy can require the acquisition
of vast volumes of data leading to many of the
challenges outlined below. Finally, the use of IP
anonymity and the easy-to-use features of many
cloud systems, such as requiring minimal infor-
mation when signing up for a service, can lead
to situations where identifying a criminal is near
impossible [Chen et al., 2012, Ruan et al., 2013].
Cloud forensics also faces a number of chal-
lenges associated with traditional digital foren-
sic investigations. Encryption and other anti-
forensic techniques are commonly used in cloud-
based crimes. The limited time for which
forensically-important data is available is also an
issue with cloud-based systems. Because such systems are continuously running, data can be overwritten at any time. Time of acqui-
sition has also proved a challenging task in re-
gard to cloud forensics. Thethi and Keane [2012]
showed that commonly-used forensic tools such
as the Linux dd command and Amazon’s AWS
Snapshot took a considerable amount of time to
acquire 30 GB of data from a cloud service.
While advances continue with regard to the
tools and techniques used in cloud forensics, the
aforementioned challenges continue to impede in-
vestigations. Henry et al. [2013] produced re-
sults showing that investigations on cloud-based
systems make up only a fraction of all digital
forensic investigations. Many investigations are
stalled beyond the point of a perpetrator’s owned
devices and rarely extend into the cloud-based
services they use. Results such as these form a
strong argument for continued research in this area.
3 Future Research Directions

3.1 Distributed Processing
Distributed Digital Forensics has been discussed
for some time [Roussev and Richard III, 2004,
Shanmugasundaram et al., 2003, Garfinkel et al.,
2009, Beebe, 2009]. However there is more scope
for it to be put into practice. Roussev et al. [2013]
cite two main reasons that the processing speed
of current generation digital forensic tools is in-
adequate for the average case: First, users have
failed to formulate explicit performance require-
ments and second, developers have failed to put
performance as a top-level concern in line with
reliability and correctness. They proposed and
validated a new approach to target acquisition
that enables file-centric processing without dis-
rupting optimal data throughput from the raw
device. Their evaluation of core forensic pro-
cessing functions with respect to processing rates
shows intrinsic limitations in both desktop and
server scenarios. Their results suggest that with
current software, keeping up with a commodity
SATA HDD at 120 MB/s requires between 120
and 200 cores.
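The cited figures imply a simple capacity calculation: dividing the sustained disk read rate by the per-core processing rate that Roussev et al.'s 120–200 core estimate implies (roughly 0.6–1 MB/s per core, values derived here from those figures rather than stated directly in the paper) gives the core count needed to keep pace with acquisition:

```python
disk_rate = 120  # MB/s, sustained read rate of a commodity SATA HDD

# Per-core forensic processing rates (MB/s) implied by the 120-200 core
# estimate; the task labels are illustrative, not from the cited study.
per_core = {"lighter processing": 1.0, "heavier processing": 0.6}

for task, rate in per_core.items():
    cores = disk_rate / rate
    print(f"{task}: {cores:.0f} cores needed to match a {disk_rate} MB/s disk")
```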
3.2 HPC and Parallel Processing
Despite the bottleneck of many digital forensic
operations being disk read speed, there are steps
in the process that are not limited by the physi-
cal read speed of the storage device. For instance
the analysis phase can consume large amounts of
time by computers and humans. High perfor-
mance computing (HPC) advantages should be
employed wherever possible to reduce computa-
tion time, and in an effort to reduce the time re-
quired by humans. Traditional HPC techniques
normally exploit some level of parallelism, and
to date have been underexploited by the digi-
tal forensic community. There are many applica-
tions where HPC techniques and hardware could
be employed, for instance on expediting each part
of the digital forensic process after the acquisi-
tion phase, i.e., preprocessing, storage, analysis
and reporting.
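As a minimal sketch of the kind of post-acquisition parallelism described above, the following Python example (an illustrative pattern, not a production forensic tool) fans the hashing of fixed-size chunks of an evidence image out across worker processes:

```python
import hashlib
from concurrent.futures import ProcessPoolExecutor

def sha256_chunk(chunk: bytes) -> str:
    """Hash one chunk of an evidence image."""
    return hashlib.sha256(chunk).hexdigest()

def hash_image_parallel(data: bytes, chunk_size: int = 1 << 20, workers: int = 4):
    """Split an image into fixed-size chunks and hash them in parallel
    across worker processes, preserving chunk order in the result."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(sha256_chunk, chunks))

if __name__ == "__main__":
    image = bytes(range(256)) * 4096  # 1 MiB stand-in for a disk image
    digests = hash_image_parallel(image, chunk_size=256 * 1024)
    print(len(digests), "chunk digests computed")
```

In practice the same pattern applies to indexing, keyword extraction and carving, where per-chunk work is far heavier than hashing and the parallel speed-up correspondingly larger.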
3.3 GPU-Powered Multi-threading
GPUs excel at “single instruction, multiple data”
(SIMD) computations with large numbers of
general-purpose stream processors that can ex-
ecute massively threaded algorithms for a num-
ber of applications and, in theory, stand to do so for many digital forensics requirements.
Marziale et al. [2007] noted that GPUs have
traditionally been both difficult to program and
targeted at very specific problems. More re-
cently, multicore CPUs coupled with GPU ac-
celerators have been widely used in high perfor-
mance computing due to better power efficiency
and performance/price ratio [Zhong et al., 2012].
In addition, there is now a multitude of inte-
grated GPUs that are on the same silicon die
as the CPU, bringing both easier programming
models and greater efficiency.
With new heterogeneous architectures and
programming models such as these, powerful and
efficient computer systems can be found in work-
stations with transparent access to CPU virtual
addresses and very low overhead for computation
offloading, and Power et al. [2015] have shown
such architectures to be advantageous in ana-
lytic processing. These seem very well suited for
many digital forensics applications, particularly
as technologies such as SSDs reduce the I/O bottleneck.
Nonetheless, the use of GPUs in digital foren-
sics is largely absent from the literature and there
are few standard digital forensic tools that utilise
GPU acceleration. Marziale et al. [2007] mea-
sured the effectiveness of offloading processing
typical to digital forensics tools (such as file carv-
ing) to GPUs and found significant performance
gains compared to simple threading techniques
on multicore CPUs. Although the programming
of the GPUs was more complex, the authors
found that the effort was worth the performance
gains. Collange et al. [2009] researched the feasi-
bility of employing GPUs to accelerate the detec-
tion of sectors from contraband files using sector-
level hashes.
Their application was able to inspect several
disk drives simultaneously and asynchronously
from each other. In addition, disks from different
computers can be inspected independently by the
application. This approach indicated that the
use of GPUs is viable.
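The underlying sector-hashing idea is straightforward to sketch in Python (this is a simplified CPU illustration of the technique, not Collange et al.'s GPU implementation, and it assumes contraband content appears sector-aligned, which real tools cannot assume):

```python
import hashlib

SECTOR = 512  # bytes, a common logical sector size

def sector_hashes(data: bytes) -> set:
    """Return the set of hashes of every full, aligned sector in a raw image."""
    return {hashlib.md5(data[i:i + SECTOR]).hexdigest()
            for i in range(0, len(data) - SECTOR + 1, SECTOR)}

def contains_contraband(image: bytes, contraband_sectors: set) -> bool:
    """Flag a drive image if any of its sector hashes matches a
    precomputed set of known-contraband sector hashes."""
    return not contraband_sectors.isdisjoint(sector_hashes(image))
```

Because each sector is hashed independently, the inner loop is embarrassingly parallel, which is precisely what makes it attractive for GPU offloading.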
However, Zha and Sahni [2011] employed
multi-pattern search algorithms to reduce the
time needed for file carving with Scalpel, showing
that the limiting factor for performance is disk
read time. The authors state there is no advan-
tage to using GPUs, at least until mechanisms
to read the disk faster are found. However, this
conclusion assumes only one disk, and the tra-
ditional digital forensic model. In the new era
of cloud forensics, SSDs, and other technological
evolutions, this I/O bottleneck will be much less significant.
Iacob et al. [2015] have employed GPUs in
information retrieval cases where response time
is of importance, as it is in digital forensics. They demon-
strate significant speed-up of two Bloom filter op-
erations, which are used in approximate match-
ing forensic applications [Breitinger and Roussev, 2014].
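The two Bloom filter operations in question, insertion and membership testing, can be sketched minimally in Python (a generic textbook Bloom filter for illustration, not the data structure used by any particular approximate matching tool; the parameter values are arbitrary):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: compact set membership with a tunable
    false-positive rate and no false negatives."""

    def __init__(self, m_bits: int = 1 << 16, k: int = 4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        # Derive k bit positions from k salted hashes of the item.
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))
```

Both operations reduce to independent hash computations and bit tests, which is why they map so naturally onto massively threaded GPU execution.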
GPUs, like many new technologies, present
new considerations for digital forensics. Breß
et al. [2013] researched the use of GPUs to pro-
cess confidential/sensitive information and found
that data in GPU RAM is retrievable by unau-
thorised users by creating a dump of device mem-
ory. However this does not impede the use
of GPUs for processing confidential information
when the system itself is only accessible to au-
thorised users.
3.4 DFaaS
Digital Forensics as a Service (DFaaS) is a mod-
ern extension of the traditional digital forensic
process. Since 2010, the Netherlands Forensic
Institute (NFI) have implemented a DFaaS solu-
tion in order to combat the volume of backlogged
cases [van Baar et al., 2014]. This DFaaS solu-
tion takes care of much of the storage, automation, and investigator enquiry in the cases it man-
ages. van Baar et al. [2014] describe the ad-
vantages of the current system including efficient
resource management, enabling detectives to di-
rectly query the data, improving the turnaround time between forming a hypothesis in an investigation and its confirmation based on the evidence,
and facilitating easier collaboration between de-
tectives working on the same case through anno-
tation and shared knowledge.
While the aforementioned DFaaS system is a
significant step in the right direction, many im-
provements to the current model could greatly
expedite and improve upon the current process.
This includes improving the functionality avail-
able to the case detectives, improving its current
indexing capabilities and on-the-fly identification
of incriminating evidence during the acquisition
process [van Baar et al., 2014].
Seeing as the DFaaS model is a cloud-based,
remote access model, two significant disadvan-
tages to the model are potential latency in us-
ing the online platform and being dependent on
the upload bandwidth available during the physi-
cal storage acquisition phase of the investigation.
A deduplicated evidence storage system, such as
that described by Watkins et al. [2009], would facilitate faster acquisition, with each unique file
across a number of investigations only needing
to be stored, indexed, analysed and annotated
once on the system. Eliminating non-pertinent,
benign files during the acquisition phase of the
investigation would greatly reduce the acquisi-
tion time (e.g., operating system, application,
previously acquired non-incriminating files, etc.).
This could greatly expedite pertinent informa-
tion being available to the detectives working on
the case as early as possible in the investigation.
In order for any evidence to be court admissible,
a forensically sound entire disk image would need
to be reconstructible from the deduplicated data
store, improving upon the system proposed by
Watkins et al. [2009]. Employing such a system
would also facilitate cloud-based storage event monitoring of virtual systems, as only the changes to the virtual storage would need to be stored between each acquisition.
3.5 Field-programmable Gate Arrays
FPGAs are integrated circuits that can be con-
figured after manufacture. FPGAs can imple-
ment any function that application-specific in-
tegrated circuits can, and offer several advan-
tages over traditional CPUs. FPGAs can exploit
inherent algorithmic parallelism (including low-
level parallelism), and can often achieve results
in fewer logic operations compared to traditional
general purpose CPUs, resulting in faster pro-
cessing times. FPGAs have recently found ap-
plication in areas such as digital signal process-
ing, imaging and video applications, and cryp-
tography. Despite demonstrating desirable traits
for digital forensics researchers, they have yet to
be exploited for non-I/O-bound facets of digital
forensics. Furthermore, as SSDs and other tech-
nologies ease the I/O bottleneck, FPGAs stand
to be more broadly applicable in digital forensics.
3.6 Applying Complementary Cutting Edge Research to Digital Forensics
Current investigation practice involves the anal-
ysis of data on standalone workstations. As such,
the sophistication of the techniques that can be
practically employed are limited. Much research
has been conducted in a variety of areas that has
theoretical relevance to digital forensics, but has
been impractical to apply to date. A movement
towards DFaaS and high-performance comput-
ing, as discussed above, offers advantages beyond
merely expediting the techniques currently used
in forensics investigations, which remain reliant
on manual input. It also promises a situation
where this complementary research may practi-
cally be brought to bear on digital forensic investigations.
One such research area is that of Informa-
tion Retrieval (IR). Traditionally, IR is concerned
with identifying documents within a corpus that
help to satisfy a user’s “information need”. IR researchers have long been faced with
the trade-off between the competing goals of pre-
cision (retrieving only relevant documents) and
recall (retrieving all the relevant documents),
whereby improving on one of these metrics typ-
ically results in a reduction in the other. In IR
for legal purposes, recall has long been acknowl-
edged as being the more important metric, given
that a single missing relevant document could
have serious consequences for the prosecution of
a criminal case, the enforcement of a contract,
etc. However, focussing on recall frequently re-
sults in an investigator being required to manu-
ally sift through a large quantity of non-relevant
documents. This is in contrast to web search,
for example, where users typically do not require
all of the relevant documents to be retrieved, of
which there may possibly be millions. Instead,
a web searcher wishes to avoid wasting time on
non-relevant material.
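The trade-off can be quantified with the standard definitions of the two metrics (the document identifiers below are invented for illustration):

```python
def precision_recall(retrieved: set, relevant: set):
    """Precision: fraction of retrieved documents that are relevant.
    Recall: fraction of relevant documents that were retrieved."""
    tp = len(retrieved & relevant)
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    return precision, recall

# A query that casts a wide net: it finds half of the relevant
# documents, but two thirds of what it returns is noise.
relevant = {1, 2, 3, 4}
retrieved = {2, 3, 5, 6, 7, 8}
p, r = precision_recall(retrieved, relevant)
print(f"precision={p:.2f} recall={r:.2f}")
```

Widening the query to capture documents 1 and 4 would raise recall towards 1.0 but, by also pulling in more noise, would typically push precision lower still, which is exactly the tension between triage-stage and court-stage search described above.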
IR for digital forensics is often seen as a typ-
ical example of legal information retrieval (e.g.
by Beebe and Clark [2007]). Although this is
certainly true at the point a case is being built
for court, it could be argued that the level of
recall required at the triage stage can be sacri-
ficed somewhat for greater precision, in order to
allow investigators to make speedy decisions about
whether a given device should be investigated
fully. Thus there is the potential for configurable
IR systems to be utilised in forensics investiga-
tions, whose focus will change depending on the
stage of the investigation.
The primary advantage of applying IR tech-
niques to digital investigations is that once the
initial preprocessing stage has been completed,
searches can be conducted extremely quickly.
Furnas et al. [1987] have shown that less than 20%
of searchers choose the same keywords for topics
they are interested in. This suggests that many
queries must be run to achieve full recall, and
also suggests that standard IR techniques such
as query expansion and synonym matching could
also be applied to increase recall.
However, increasing recall typically reduces
precision by also retrieving non-relevant docu-
ments as false positives. There are a number
of ways in which this problem can be alleviated.
The use of the aforementioned data deduplica-
tion techniques would eliminate standard sys-
tem files from consideration (Beebe and Diet-
rich [2007] note that the word “kill” appears as
a command in many system files). Additionally,
common visualisation approaches such as rank-
ing [Beebe and Liu, 2014] and clustering [Beebe
et al., 2011] are likely to help investigators in
their manual search of retrieved documents.
Another consideration is that event timeline
reconstruction is extremely important in a crim-
inal investigation [Chabot et al., 2014]. When
constructing a timeline from digital evidence,
some temporal data is readily available (e.g.
chat logs, file modification times, email times-
tamps, etc.), although it should be acknowledged
that even this is not without its own challenges.
Within the IR community, much research has
been conducted into the extraction of tempo-
ral information from unstructured text [Campos
et al., 2014]. This can be used to dramatically
reduce the manual load on investigators in this regard.
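A very simple form of such temporal extraction can be sketched with a regular expression (this covers only one common timestamp shape and is purely illustrative; the temporal taggers surveyed by Campos et al. handle vastly more variation, including relative expressions such as “last Tuesday”):

```python
import re
from datetime import datetime

# One common timestamp shape: ISO-style date, optional seconds.
TS = re.compile(r"\b(\d{4}-\d{2}-\d{2})[ T](\d{2}:\d{2}(?::\d{2})?)\b")

def extract_timestamps(text: str):
    """Pull recognisable timestamps out of unstructured text and
    return them as a sorted list of datetime objects."""
    out = []
    for date, time in TS.findall(text):
        fmt = "%Y-%m-%d %H:%M:%S" if time.count(":") == 2 else "%Y-%m-%d %H:%M"
        out.append(datetime.strptime(f"{date} {time}", fmt))
    return sorted(out)
```

Even this crude pass turns free text in chat logs or documents into sortable events that can be merged into the investigation timeline alongside filesystem and email timestamps.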
4 Conclusion

In this paper, a number of current challenges in the field of digital forensics are discussed. Each
of these challenges in isolation can hamper the
discovery of pertinent information for digital in-
vestigators and detectives involved in a multi-
tude of different cases requiring digital forensic
analysis. Combined, the negative effect of these
challenges can be greatly amplified. These issues, alongside limited expertise and huge workloads, have resulted in the digital evidence back-
log increasing to the order of years for many
law enforcement agencies worldwide. The pre-
dicted ballooning of case volume in the near fu-
ture will serve to further compound the backlog
problem – particularly as the volume of evidence
from non-traditional sources, such as cloud-based
and Internet-of-Things sources, is also likely to grow.
In terms of research directions, practices al-
ready in place in many Computer Science sub-
disciplines hold promise for addressing these
challenges including those in distributed, paral-
lel, GPU and FPGA processing, and informa-
tion retrieval. More intelligent deduplicated ev-
idence data storage and analysis techniques can
help eliminate the duplicated processing and duplicated expert analysis of previously analysed content.
These research directions can be applied to the
traditional digital forensics process to help com-
bat the aforementioned backlog through more ef-
ficient allocation of precious digital forensic ex-
pert time through the improvement and expedi-
tion of the process itself.
References

S. Almulla, Y. Iraqi, and A. Jones. Cloud
Forensics: A Research Perspective. In
Innovations in Information Technology (IIT),
2013 9th International Conference on, pages
66–71, March 2013.
Ibrahim Baggili, Jeff Oduro, Kyle Anthony,
Frank Breitinger, and Glenn McGee. Watch
What You Wear: Preliminary Forensic
Analysis of Smart Watches. In 2015 10th
International Conference on Availability,
Reliability and Security, pages 303–311.
IEEE, Aug 2015.
Nicole Beebe. Digital Forensic Research: The
Good, the Bad and the Unaddressed. In
Advances in Digital Forensics V, pages 17–36.
Springer, 2009.
Nicole Beebe and Glenn Dietrich. A New
Process Model for Text String Searching. In
Advances in Digital Forensics III, pages
179–191. Springer, 2007.
Nicole Lang Beebe and Jan Guynes Clark.
Digital Forensic Text String Searching:
Improving Information Retrieval Effectiveness
by Thematically Clustering Search Results.
Digital Investigation, 4(S1):49–54, 2007.
Nicole Lang Beebe and Lishu Liu. Ranking
Algorithms for Digital Forensic String Search
Hits. Digital Investigation, 11(S2):314–322, 2014.
Nicole Lang Beebe, Jan Guynes Clark, Glenn B.
Dietrich, Myung S. Ko, and Daijin Ko.
Post-Retrieval Search Hit Clustering to
Improve Information Retrieval Effectiveness:
Two Digital Forensics Case Studies. Decision
Support Systems, 51(4):732–744, 2011.
Frank Breitinger and Vassil Roussev.
Automated Evaluation of Approximate
Matching Algorithms on Real Data. Digital
Investigation, 11:S10–S17, 2014.
Sebastian Breß, Stefan Kiltz, and Martin
Schäler. Forensics on GPU Coprocessing in
Databases–Research Challenges, First
Experiments, and Countermeasures. In BTW
Workshops, pages 115–129. Citeseer, 2013.
Ricardo Campos, Gaël Dias, Alípio M Jorge,
and Adam Jatowt. Survey of Temporal
Information Retrieval and Related
Applications. ACM Computing Surveys
(CSUR), 47(2):15, 2014.
Yoan Chabot, Aurélie Bertaux, Tahar Kechadi,
and Christophe Nicolle. Event
Reconstruction: A State of the Art.
Handbook of Research on Digital Crime,
Cyberspace Security, and Information
Assurance, page 15, 2014.
Guangxuan Chen, Yanhui Du, Panke Qin, and
Jin Du. Suggestions to Digital Forensics in
Cloud Computing Era. In Network
Infrastructure and Digital Content
(IC-NIDC), 2012 3rd IEEE International
Conference on, pages 540–544, Sept 2012.
Lei Chen, Lanchuan Xu, Xiaohui Yuan, and
N. Shashidhar. Digital Forensics in Social
Networks and the Cloud: Process,
Approaches, Methods, Tools, and Challenges.
In Computing, Networking and
Communications (ICNC), 2015 International
Conference on, pages 1132–1136, Feb 2015.
Sylvain Collange, Yoginder S Dandass, Marc
Daumas, and David Defour. Using Graphics
Processors for Parallelizing Hash-Based Data
Carving. In System Sciences, 2009. HICSS’09.
42nd Hawaii International Conference on,
pages 1–10. IEEE, 2009.
George W. Furnas, Thomas K. Landauer,
Louis M. Gomez, and Susan T. Dumais. The
Vocabulary Problem in Human-System
Communication. Communications of the
ACM, 30(11):964–971, 1987.
Garda Síochána Inspectorate. Changing
Policing in Ireland, November 2015.
Simson Garfinkel, Paul Farrell, Vassil Roussev,
and George Dinolt. Bringing Science to
Digital Forensics with Standardized Forensic
Corpora. Digital Investigation, 6:S2–S11, 2009.
Simson L Garfinkel. Digital Forensics Research:
The Next 10 Years. Digital Investigation, 7:
S64–S73, 2010.
Robert C. Hegarty, David J. Lamb, and Andrew
Attwood. Interoperability Challenges in the
Internet of Things. In Paul Dowland, Steven
Furnell, and Bogdan Ghita, editors, Proc. of
the 10th International Network Conference
(INC 2014), pages 163–172. Plymouth
University, 2014.
Paul Henry, Jacob Williams, and Benjamin
Wright. The SANS Survey of Digital
Forensics and Incident Response. Technical
report, SANS Institute, July 2013.
Ben Hitchcock, Nhien-An Le-Khac, and Mark
Scanlon. Tiered Forensic Methodology Model
for Digital Field Triage by Non-Digital
Evidence Specialists. Digital Investigation, 13
(S1), March 2016.
Alexandru Iacob, Lucian Itu, Lucian Sasu,
Florin Moldoveanu, and Constantin Suciu.
GPU Accelerated Information Retrieval Using
Bloom Filters. In System Theory, Control and
Computing (ICSTCC), 2015 19th
International Conference on, pages 872–876.
IEEE, 2015.
Joshua I James and Yunsik Jake Jang.
Measuring Digital Crime Investigation
Capacity to Guide International Crime
Prevention Strategies. In Future Information
Technology, pages 361–366. Springer, 2014.
Juniper Research. The Internet of Things:
Consumer, Industrial & Public Services
2015-2020, July 2015.
Nickson M Karie and Hein S Venter. Taxonomy
of Challenges for Digital Forensics. Journal of
Forensic Sciences, 60(4):885–893, 2015.
Lodovico Marziale, Golden G Richard, and
Vassil Roussev. Massive Threading: Using
GPUs to Increase the Performance of Digital
Forensics Tools. Digital Investigation, 4:
73–81, 2007.
E. Morioka and M.S. Sharbaf. Cloud
Computing: Digital Forensic Solutions. In
Information Technology - New Generations
(ITNG), 2015 12th International Conference
on, pages 589–594, April 2015.
NIST. NIST Cloud Computing Forensic Science
Challenges. 2014.
Jason Power, Yinan Li, Mark D Hill, Jignesh M
Patel, and David A Wood. Toward GPUs
Being Mainstream in Analytic Processing. In
Proc. of the 11th International Workshop on
Data Management on New Hardware
(DaMoN), 2015.
Darren Quick and Kim-Kwang Raymond Choo.
Impacts of Increasing Volume of Digital
Forensic Data: A Survey and Future Research
Challenges. Digital Investigation, 11(4):
273–294, 2014.
Sriram Raghavan. Digital Forensic Research:
Current State of the Art. CSI Transactions
on ICT, 1(1):91–114, 2013.
Vassil Roussev and Golden G Richard III.
Breaking the Performance Wall: The Case for
Distributed Digital Forensics. In Proc. of the
2004 Digital Forensics Research Workshop
(DFRWS), volume 94, 2004.
Vassil Roussev, Candice Quates, and Robert
Martell. Real-time Digital Forensics and
Triage. Digital Investigation, 10(2):158–167, 2013.
Keyun Ruan, Joe Carthy, Tahar Kechadi, and
Ibrahim Baggili. Cloud Forensics Definitions
and Critical Criteria for Cloud Forensic
Capability: An Overview of Survey Results.
Digital Investigation, 10(1):34 – 43, 2013.
Mark Scanlon and M-Tahar Kechadi. Digital
Evidence Bag Selection for P2P Network
Investigation. In Proc. of the 7th
International Symposium on Digital Forensics
and Information Security, pages 307–314.
Springer, Gwangju, South Korea, 2014.
Kulesh Shanmugasundaram, Nasir Memon,
Anubhav Savant, and Herve Bronnimann.
ForNet: A Distributed Forensics Network. In
Computer Network Security, pages 1–16.
Springer, 2003.
Stavros Simou, Christos Kalloniatis, Evangelia
Kavakli, and Stefanos Gritzalis. Cloud
Forensics Solutions: A Review. In Lazaros
Iliadis, Michael Papazoglou, and Klaus Pohl,
editors, Advanced Information Systems
Engineering Workshops, volume 178 of
Lecture Notes in Business Information
Processing, pages 299–309. Springer
International Publishing, 2014.
Iain Sutherland, Huw Read, and Konstantinos
Xynos. Forensic Analysis of Smart TV: A
Current Issue and Call to Arms. Digital
Investigation, 11(3):175–178, September 2014.
Neha Thethi and Anthony Keane. Digital
Forensics Investigations in the Cloud. In
IEEE International Advance Computing
Conference (IACC), Sept 2012.
Arijit Ukil, Jaydip Sen, and Sripad Koilakonda.
Embedded Security for Internet of Things. In
2011 2nd National Conference on Emerging
Trends and Applications in Computer
Science, pages 1–6. IEEE, March 2011.
RB van Baar, HMA van Beek, and EJ van Eijk.
Digital Forensics as a Service: A Game
Changer. Digital Investigation, 11:S54–S62, 2014.
Kathryn Watkins, Mike McWhorte, Jeff Long,
and Bill Hill. Teleporter: An Analytically and
Forensically Sound Duplicate Transfer
System. Digital Investigation, 6:S43–S47, 2009.
Mohammad Wazid, Avita Katal, RH Goudar,
and Smitha Rao. Hacktivism Trends, Digital
Forensic Tools and Challenges: A Survey. In
Information & Communication Technologies
(ICT), 2013 IEEE Conference on, pages
138–144. IEEE, 2013.
Xinyan Zha and Sartaj Sahni. Fast in-Place File
Carving for Digital Forensics. In Forensics in
Telecommunications, Information, and
Multimedia, pages 141–158. Springer, 2011.
Ziming Zhong, Vladimir Rychkov, and Alexey
Lastovetsky. Data Partitioning on
Heterogeneous Multicore and Multi-GPU
Systems Using Functional Performance
Models of Data-Parallel Applications. In
Cluster Computing (CLUSTER), 2012 IEEE
International Conference on, pages 191–199.
IEEE, 2012.