Digital Investigation

Published by Elsevier
Online ISSN: 1742-2876
Publications
Article
In recent years the increasing complexity of games has demanded correspondingly more capable hardware in the consoles required to run them. It is not uncommon for games consoles to now feature many pieces of hardware similar to those found in a standard personal computer. In forensic terms, the most significant inclusion in today's games consoles is the storage media, whether flash-based memory cards or electro-mechanical hard disk drives. When combined with networks, particularly the internet, this inbuilt storage gives games consoles a host of new features, including downloading games, updating console software/firmware, streaming media from different network locations, and social networking activities, usually involving user-specific content being saved to the console's storage media. This paper analyzes the SATA hard disk drive contained in Microsoft's Xbox 360 games console. We present our findings and suggest basic guidelines for future investigations to recover stored remnants of information from the drive.
 
Article
Today’s Golden Age of computer forensics is quickly coming to an end. Without a clear strategy for enabling research efforts that build upon one another, forensic research will fall behind the market, tools will become increasingly obsolete, and law enforcement, military and other users of computer forensics products will be unable to rely on the results of forensic analysis. This article summarizes current forensic research directions and argues that to move forward the community needs to adopt standardized, modular approaches for data representation and forensic processing.
 
Article
Homologous files share identical sets of bits in the same order. Because such files are not completely identical, traditional techniques such as cryptographic hashing cannot be used to identify them. This paper introduces a new technique for constructing hash signatures by combining a number of traditional hashes whose boundaries are determined by the context of the input. These signatures can be used to identify modified versions of known files even if data has been inserted, modified, or deleted in the new files. The description of this method is followed by a brief analysis of its performance and some sample applications to computer forensics.
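
As a rough illustration of the idea (not the paper's exact algorithm), the sketch below builds a signature from chunks whose boundaries are chosen by a rolling hash over the input, so that a local edit only disturbs the nearby chunks; the parameters and helper names are illustrative assumptions.

```python
import hashlib

def ctph_signature(data: bytes, block_size: int = 64, window: int = 7) -> str:
    """Toy context-triggered piecewise hash: chunk boundaries are declared
    wherever a rolling hash over the last `window` bytes hits a trigger
    value, then each chunk is summarized by one character of its MD5."""
    signature, chunk_start, rolling = [], 0, 0
    for i, byte in enumerate(data):
        # Simple additive rolling hash over a sliding window of bytes.
        rolling += byte
        if i >= window:
            rolling -= data[i - window]
        # The boundary depends only on local content, so the same content
        # produces the same boundaries wherever it sits in the file.
        if rolling % block_size == block_size - 1:
            signature.append(hashlib.md5(data[chunk_start:i + 1]).hexdigest()[0])
            chunk_start = i + 1
    if chunk_start < len(data):
        signature.append(hashlib.md5(data[chunk_start:]).hexdigest()[0])
    return "".join(signature)

# An insertion only perturbs the chunks around the edit, so most of the
# signature is preserved and the two files can still be matched.
a = b"A" * 200 + b"The quick brown fox jumps over the lazy dog" * 20
b = a[:300] + b"INSERTED" + a[300:]
print(ctph_signature(a))
print(ctph_signature(b))
```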
 
Article
This paper presents a rigorous method for reconstructing events in digital systems. It is based on the idea that once the system is described as a finite state machine, its state space can be explored to determine all possible scenarios of the incident. To formalize evidence, the evidential statement notation is introduced. It represents the facts conveyed by the evidence as a series of witness stories that restrict possible computations of the finite state machine. To automate event reconstruction, a generic event reconstruction algorithm is proposed. It computes the set of all possible explanations for the given evidential statement with respect to the given finite state machine.
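
A minimal sketch of the idea, using a hypothetical toy machine and a brute-force enumeration rather than the paper's formal notation: every computation of the machine that satisfies the evidential constraints is returned as a possible explanation.

```python
from itertools import product

# Toy non-deterministic finite state machine (an illustrative assumption,
# not taken from the paper): a printer that can run one of two jobs.
STATES = {"idle", "printing_A", "printing_B", "jammed"}
TRANSITIONS = {
    "idle": {"idle", "printing_A", "printing_B"},
    "printing_A": {"idle", "jammed"},
    "printing_B": {"idle", "jammed"},
    "jammed": {"jammed"},
}

def reconstruct(observations, length):
    """Return every run of `length` states consistent with the evidential
    statement: a dict mapping time step -> set of states allowed there."""
    runs = []
    for run in product(STATES, repeat=length):
        # Every consecutive pair must be a legal transition of the machine.
        if any(b not in TRANSITIONS[a] for a, b in zip(run, run[1:])):
            continue
        # Every witness story must be satisfied at its time step.
        if all(run[t] in allowed for t, allowed in observations.items()):
            runs.append(run)
    return runs

# Evidence: the machine started idle and ended jammed; enumerate the
# possible scenarios (which job could have caused the jam).
evidence = {0: {"idle"}, 3: {"jammed"}}
for run in reconstruct(evidence, length=4):
    print(run)
```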
 
Article
Forensic analysis requires the acquisition and management of many different types of evidence, including individual disk drives, RAID sets, network packets, memory images, and extracted files. Often the same evidence is reviewed by several different tools or examiners in different locations. We propose a backwards-compatible redesign of the Advanced Forensic Format—an open, extensible file format for storing and sharing evidence, arbitrary case-related information and analysis results among different tools. The new specification, termed AFF4, is designed to be simple to implement and is built upon the well-supported ZIP file format specification. Furthermore, the AFF4 implementation is backwards compatible with existing AFF files.
 
Article
A prominent banking institution in the United States has submitted an application to have its Computer Forensics unit inspected as the first step towards attaining accreditation. Several other corporations and businesses that operate Computer Forensics units are also considering submitting their applications. This is in response to the American Society of Crime Laboratory Directors/Laboratory Accreditation Board's (ASCLD/LAB) accreditation program which began offering accreditation in the Digital Evidence Discipline in 2003. As defined in the ASCLD/LAB accreditation manual, any laboratory conducting forensic analysis in any of the four sub-disciplines of Digital Evidence (Audio Analysis, Computer Forensics, Digital Imaging Analysis, or Video Analysis) can apply for accreditation. This information is widely known in the forensic crime laboratory community, but most executives and examiners in the corporate and business sector are not aware that they also can apply for accreditation in the Digital Evidence discipline.
 
Article
Recovering evidential data from magnetic tapes in a forensically sound manner is a difficult task. There are many different tape technologies in existence today and an even greater number of archive formats used. This paper discusses the issues and challenges involved in the forensic acquisition and analysis of magnetic tapes. It identifies areas of slack space on tapes and discusses the challenges of low level acquisition of an entire length of tape. It suggests a basic methodology for determining the contents of a tape, acquiring tape files, and preparing them for forensic analysis.
 
Article
The acquisition of volatile memory from a compromised computer is difficult to perform reliably because the acquisition procedure should not rely on untrusted code, such as the operating system or applications executing on top of it. In this paper, we present a procedure for acquiring volatile memory using a hardware expansion card that can copy memory to an external storage device. The card is installed into a PCI bus slot before an incident occurs and is disabled until a physical switch on the back of the system is pressed. The card cannot easily be detected by an attacker and the acquisition procedure does not rely on untrusted resources. We present general requirements for memory acquisition tools, our acquisition procedure, and the initial results of our hardware implementation of the procedure.
 
Article
Network Address Translation (NAT) is a technology allowing a number of machines to share a single IP address. This presents a problem for network forensics since it is difficult to attribute observed traffic to specific hosts. We present a model and algorithm for disentangling observed traffic into discrete sources. Our model relies on correlating a number of artifacts left behind by the NAT gateway, which allows individual sources to be identified. The model works well for a small number of sources, as commonly found behind a home or small office NAT gateway.
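
The sketch below illustrates only one such artifact, the per-host IP ID counter, and is not the paper's full correlation model; the greedy grouping and the threshold are assumptions for illustration.

```python
def split_by_ipid(packets, max_gap=200):
    """Greedy sketch: many operating systems increment the IP ID field per
    packet, so packets from one host behind the NAT form a roughly monotone
    IP ID sequence. Assign each packet to the existing source whose last ID
    is the closest small predecessor; otherwise start a new source."""
    sources = []  # each source is a list of (timestamp, ip_id) tuples
    for ts, ip_id in sorted(packets):
        best, best_gap = None, None
        for src in sources:
            gap = (ip_id - src[-1][1]) % 65536  # IP ID wraps at 16 bits
            if 0 < gap <= max_gap and (best_gap is None or gap < best_gap):
                best, best_gap = src, gap
        if best is None:
            sources.append([(ts, ip_id)])
        else:
            best.append((ts, ip_id))
    return sources

# Two interleaved counters, as two hosts behind one NAT gateway might produce.
observed = [(1, 100), (2, 5000), (3, 101), (4, 5003), (5, 103), (6, 5004)]
for i, src in enumerate(split_by_ipid(observed)):
    print("host", i, src)
```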
 
Article
Onion routers were born about 10 years ago as a blended military/research project whose main goal was to defeat traffic analysis (TA). TA is used in part to identify the remote IP addresses that a given host seeks to contact, and may serve various purposes, from simple statistical analysis to illegal interception. In response, to protect personnel whose communications are being monitored by hostile forces, researchers from the US Naval Research Laboratory conceived a system, dubbed "Onion Routing", that eludes both of these operations. In this author's opinion, the original "owners" of the project lost control of it. Software is freely available that enables anyone to utilize a network of onion routers. The result is a strong evolution (in terms of "privacy") that is very difficult to manage, especially when an international investigation must be performed.
 
Article
Memory analysis is an integral part of any computer forensic investigation, providing access to volatile data not found on a drive image. While memory analysis has recently made significant progress, it is still hampered by hard-coded tools that cannot generalize beyond the specific operating system and version they were developed for. This paper proposes using the debug structures embedded in memory dumps and Microsoft's program database (PDB) files to create a flexible tool that takes an arbitrary memory dump from any of the family of Windows NT operating systems and extracts process, configuration, and network activity information. The debug structures and PDB files are incorporated into a memory analysis tool and tested against dumps from 32-bit Windows XP with physical address extensions (PAE) enabled and disabled, 32-bit Windows Vista with PAE enabled, and 64-bit Windows 7 systems. The results show the analysis tool is able to identify and parse an arbitrary memory dump and extract process, registry, and network communication information.
 
Article
An image of a computer's physical memory can provide a forensic examiner with a wealth of information. A small area of system memory, the nonpaged pool, contains a great deal of information about currently and formerly active processes. As this paper shows, more than 90% of such information can be retrieved even 24 h after process termination under optimum conditions. Great care must be taken, as the acquisition process usually affects the memory contents to be acquired. In order to minimize the impact on volatile data, this paper for the first time analyzes the pool allocation mechanism of the Microsoft Windows operating system. It describes a test arrangement that makes it possible to obtain a time series of physical memory images while also reducing the effect on the observed operating system. Using this environment it was found that allocations from the nonpaged pool are reused based on their size and a last-in, first-out schedule. In addition, a passive memory compaction strategy may apply. So, the creation of a new object is likely to eradicate the evidence of an object of the same class that was destroyed just before. The paper concludes with a discussion of the implications for incident response procedures, forensic examinations, and the creation of forensic tools.
 
Article
Mobile devices are among the most disruptive technologies of recent years, gaining ever more diffusion and success in the daily life of a wide range of people. Unfortunately, while the number of mobile devices implicated in criminal activities is significant and growing, the capability to perform forensic analysis of such devices is limited by both technological and methodological problems. In this paper, we focus on anti-forensic techniques applied to mobile devices, presenting some fully automated instances of such techniques for Android devices. Furthermore, we tested the effectiveness of such techniques against both the cursory examination of the device and some acquisition tools.
 
Article
There are no general frameworks with which we may analyze the anti-forensics situation. Solving anti-forensic issues requires that we create a consensus view of the problem itself. This paper attempts to arrive at a standardized method of addressing anti-forensics by defining the term, categorizing the anti-forensics techniques and outlining general guidelines to protect forensic integrity.
 
Article
In this paper we examine the use of the Windows Registry as a source of forensic evidence in digital investigations, especially those related to Internet usage. We identify the sources of the information, along with the methods used and toolsets available for such examinations, and illustrate their use for recovering evidence. We highlight issues of forensic practice related to Registry inspections and propose ideas for further improvements of the process and the tools involved.
 
Article
When a USB storage device, such as a thumb drive, is connected to a Windows system, several identifiers are created on the system. These identifiers, or artifacts, persist even after the system has been shut down. In many cases, these artifacts may be used for forensics purposes to identify specific devices that have been connected to the Windows system(s) in question.
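
For illustration, the sketch below enumerates the well-known USBSTOR key on a live Windows system with Python's standard winreg module; a forensic examination would normally parse the SYSTEM hive offline instead, and the artifact set described above is broader than this single key.

```python
import winreg

# The USBSTOR key keeps one subkey per device model and, beneath it, one
# subkey per device instance (often the device serial number), which
# persists after the device is unplugged.
USBSTOR = r"SYSTEM\CurrentControlSet\Enum\USBSTOR"

def list_usb_storage_devices():
    devices = []
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR) as root:
        for i in range(winreg.QueryInfoKey(root)[0]):      # per device model
            model = winreg.EnumKey(root, i)
            with winreg.OpenKey(root, model) as model_key:
                for j in range(winreg.QueryInfoKey(model_key)[0]):  # per instance
                    devices.append((model, winreg.EnumKey(model_key, j)))
    return devices

if __name__ == "__main__":
    for model, serial in list_usb_storage_devices():
        print(model, serial)
```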
 
Article
We utilize traces of the demosaicing operation in digital cameras to identify the source camera-model of a digital image. To identify demosaicing artifacts associated with different camera-models, we employ two methods and define a set of image characteristics which are used as features in designing classifiers that distinguish between digital camera-models. The first method tries to estimate demosaicing parameters assuming a linear model, while the second one extracts periodicity features to detect simple forms of demosaicing. To determine the reliability of the designated image features in differentiating the source camera-model, we consider both images taken under similar settings at fixed sceneries and images taken under independent conditions. In order to show how to use these methods as a forensics tool, we consider several scenarios where we try to (i) determine which camera-model was used to capture a given image among three, four, and five camera-models, (ii) decide whether or not a given image was taken by a particular camera-model among a very large number of camera-models (on the order of hundreds), and (iii) more reliably identify the individual camera that captured a given image by incorporating demosaicing artifacts with noise characteristics of the imaging sensor of the camera.
 
Article
Collecting data related to Internet threats has now become a relatively common task for security researchers and network operators. However, the huge amount of raw data can rapidly overwhelm people in charge of analyzing such data sets. Systematic analysis procedures are thus needed to extract useful information from large traffic data sets in order to assist the analyst's investigations. This work describes an analysis framework specifically developed to gain insights into honeynet data. Our forensics procedure aims at finding, within an attack data set, groups of network traces sharing various kinds of similar patterns. In our exploratory data analysis, we seek to design a flexible clustering tool that can be applied in a systematic way on different feature vectors characterizing the attacks. In this paper, we illustrate the application of our method by analyzing one specific aspect of the honeynet data, i.e. the time series of the attacks. We show that clustering attack patterns with an appropriate similarity measure provides very good candidates for further in-depth investigation, which can help us to discover the plausible root causes of the underlying phenomena. The results of our clustering on time series analysis enable us to identify the activities of several worms and botnets in the collected traffic.
 
– Solution matrix.
– USB hack tool detection by commonly used antivirus software.
Article
Information security risks associated with Universal Serial Bus (USB) storage devices have been serious issues since 2003, which marked the wide adoption of USB technologies in the computing industry, especially in corporate networks. Due to the insecure design and the open standards of USB technologies, attackers have successfully exploited various vulnerabilities in USB protocols, USB embedded security software, USB drivers, and Windows Autoplay features to launch various software attacks against host computers and USB devices. The purposes of this paper are: (i) to provide an investigation of the currently identified USB-based software attacks on host computers and USB storage devices, (ii) to identify the technology enablers of the attacks, and (iii) to form a taxonomy of the attacks. The results show that a multilayered security solution framework involving software implementations at the User Mode layer of the operating system can help eliminate the root cause of the problem.
 
Article
There is an alarming increase in the number of cybercrime incidents conducted through anonymous e-mails. The problem of e-mail authorship attribution is to identify the most plausible author of an anonymous e-mail from a group of potential suspects. Most previous contributions employed a traditional classification approach, such as decision trees or Support Vector Machines (SVM), to identify the author and studied the effects of different writing-style features on classification accuracy. However, little attention has been given to ensuring the quality of the evidence. In this paper, we introduce an innovative data mining method to capture the write-print of every suspect and model it as combinations of features that occur frequently in the suspect's e-mails. This notion is called a frequent pattern, which has proven effective in many data mining applications, but this is the first time it has been applied to the problem of authorship attribution. Unlike the traditional approach, the write-print extracted by our method is unique among the suspects and, therefore, provides convincing and credible evidence for presentation in a court of law. Experiments on real-life e-mails suggest that the proposed method can effectively identify the author and that the results are supported by strong evidence.
 
Article
In today's advanced and rapidly changing technological environment, the question of the authorship of a document often transcends the more traditional means of inscription. It now needs to be examined from the perspective of complex digital technologies comprising computer systems, software programs, scanners and printers. The impact and threat to the economy and society of digitized document frauds, such as the counterfeiting of currency, stamps, judicial papers, educational certificates, degrees, cheques, wills, property papers, licenses, security passes, badges, and immigration and visa documents, including the fraudulent generation of, or alteration in, commonly used documents, are inevitable and growing; hence our forensic readiness is crucial. This work identifies the peculiar characteristics needed for the development of a non-destructive automated system for efficiently detecting and fixing the origin of questioned documents by linking them to the scanner and printer used. This in turn may help establish the authorship of the questioned documents. This work is the first general experimental study of the investigation of digitized document frauds in a forensically sound manner in the light of Locard's principle of information exchange, the individuality principle, the cause-effect principle and the principle of comparison.
 
Article
This paper proposes methods to automate recovery and analysis of Windows NT5 (XP and 2003) event logs for computer forensics. Requirements are formulated and methods are evaluated with respect to motivation and process models. A new, freely available tool is presented that, based on these requirements, automates the repair of a common type of corruption often observed in data-carved NT5 event logs. This tool automates repair of multiple event logs in a single step without user intervention. The tool was initially developed to meet the immediate needs of computer forensic engagements. Automating the recovery, repair, and correlation of multiple logs makes these methods more feasible for consideration in both a wider range of cases and earlier phases of cases, and hopefully, in turn, in standard procedures. The tool was developed to fill a gap between the capabilities of certain other freely available tools that may recover and correlate large volumes of log events, and consequently to permit correlation with various other kinds of Windows artifacts. The methods are examined in the context of an example digital forensic service request intended to illustrate the kinds of civil cases that motivated this work.
 
Article
This article reviews recent empirical evidence garnered from inductive studies of insiders examining "who, what, where, when, why and how" of insider computer attacks. These results are then compared to insider "theories" and folklore. Then the use of a specific deductive profiling approach to insider investigation and case management is described along with illustrative case studies. The overall role of the behavioral consultant in insider cases is examined with emphasis on specific forms of support for the investigative team and aid to managers and security personnel with case management of insiders within corporate environments.
 
Article
Steganography is the art of hiding a message in plain sight. Modern steganographic tools that conceal data in innocuous-looking digital image files are widely available. The use of such tools by terrorists, hostile states, criminal organizations, etc., to camouflage the planning and coordination of their illicit activities poses a serious challenge. Most steganography detection tools rely on signatures that describe particular steganography programs. Signature-based classifiers offer strong detection capabilities against known threats, but they suffer from an inability to detect previously unseen forms of steganography. Novel steganography detection requires an anomaly-based classifier. This paper describes and demonstrates a blind classification algorithm that uses hyper-dimensional geometric methods to model steganography-free JPEG images. The geometric model, comprising one or more convex polytopes, hyper-spheres, or hyper-ellipsoids in the attribute space, provides superior anomaly detection compared to previous research. Experimental results show that the classifier detects, on average, 85.4% of Jsteg steganography images with a mean embedding rate of 0.14 bits per pixel, compared to previous research that achieved a mean detection rate of just 65%. Further, the classification algorithm creates models for as many training classes of data as are available, resulting in a hybrid anomaly/signature or signature-only based classifier, which increases Jsteg detection accuracy to 95%.
 
Article
This paper describes how to use JTAG (Joint Test Action Group, also called boundary-scan) for producing a forensic image (a one-to-one copy of the data found on an exhibit) of an embedded system. A JTAG test access port is normally used for testing printed circuit boards or for debugging embedded software. The method described in this paper uses a JTAG test access port to access memory chips directly. By accessing memory chips directly, the risk of changing data on the exhibit is minimized. User-level passwords can also be bypassed.
 
Article
All Windows memory analysis techniques depend on the examiner's ability to translate the virtual addresses used by programs and operating system components into the true locations of data in a memory image. In some memory images up to 20% of all the virtual addresses in use point to so-called "invalid" pages that cannot be found using a naive method of address translation. This paper explains virtual address translation, enumerates the different states of invalid memory pages, and presents a more robust strategy for address translation. This new method incorporates invalid pages and even the paging file to greatly increase the completeness of the analysis. By using every available page, every part of the buffalo as it were, the examiner can better recreate the state of the machine as it existed at the time of imaging.
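
For context, here is a minimal sketch of the naive non-PAE x86 page-table walk that the paper's more robust strategy extends; it simply gives up on non-present ("invalid") pages rather than consulting transition, prototype, or paging-file information. The offsets and flag masks follow the standard x86 layout; the function names are illustrative.

```python
import struct

PAGE_PRESENT = 0x1
PAGE_SIZE_4MB = 0x80  # PS bit in the page directory entry

def translate_x86(image: bytes, dtb: int, vaddr: int):
    """Naive non-PAE x86 translation against a raw memory image.
    `dtb` is the directory table base (CR3) of the process. Returns a
    physical offset into the image, or None for an 'invalid' page."""
    pde_index = (vaddr >> 22) & 0x3FF
    pte_index = (vaddr >> 12) & 0x3FF

    pde_off = dtb + pde_index * 4
    pde = struct.unpack("<I", image[pde_off:pde_off + 4])[0]
    if not pde & PAGE_PRESENT:
        return None   # not present: a robust tool would inspect the PDE
                      # further and possibly consult the paging file here
    if pde & PAGE_SIZE_4MB:  # 4 MB large page: no second-level table
        return (pde & 0xFFC00000) | (vaddr & 0x3FFFFF)

    pte_off = (pde & 0xFFFFF000) + pte_index * 4
    pte = struct.unpack("<I", image[pte_off:pte_off + 4])[0]
    if not pte & PAGE_PRESENT:
        return None
    return (pte & 0xFFFFF000) | (vaddr & 0xFFF)
```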
 
Article
In order for technical research in digital forensics to progress, a cohesive set of electronic forensics characteristics must be specified. To date, although the need for such a framework has been expressed, with a few exceptions clear unifying characteristics have not been well laid out. We begin the process of formulating a framework for digital forensics research by identifying fundamental properties and abstractions.
 
– Logitech Communicate STX. Video recorded at 640 × 480 with the Xvid codec, variable quality. σ_nat = 8.5, σ_ref = 7.5.
– Philips SPC200NC, 352 × 288, Xvid quality 4. A varying number of frames from the natural video is used to estimate the noise pattern. Uncompressed flatfield videos are used to estimate the reference patterns.
Article
The Photo Response Non-Uniformity is a unique sensor noise pattern that is present in each image or video acquired with a digital camera. In this work a wavelet-based technique for extracting these patterns from digital images is applied to compressed low-resolution videos originating mainly from webcams. After recording these videos with a variety of codec and resolution settings, the videos were uploaded to YouTube, a popular internet video sharing website. By comparing the average pattern extracted from the resulting downloaded videos with the average pattern obtained from multiple reference cameras of the same brand and type, we attempted to identify the source camera. This may be of interest in cases of child abuse or child pornography. Depending on the codec, quality settings and recording resolution, very satisfactory results were obtained.
 
Article
Detecting manipulated images has become an important problem in many domains (including medical imaging, forensics, journalism and scientific publication) largely due to the recent success of image synthesis techniques and the accessibility of image editing software. Many previous signal-processing techniques are concerned about finding forgery through simple transformation (e.g. resizing, rotating, or scaling), yet little attention is given to examining the semantic content of an image, which is the main issue in recent image forgeries. Here, we present a complete workflow for finding the anomalies within images by combining the methods known in computer graphics and artificial intelligence. We first find perceptually meaningful regions using an image segmentation technique and classify these regions based on image statistics. We then use AI common-sense reasoning techniques to find ambiguities and anomalies within an image as well as perform reasoning across a corpus of images to identify a semantically based candidate list of potential fraudulent images. Our method introduces a novel framework for forensic reasoning, which allows detection of image tampering, even with nearly flawless mathematical techniques.
 
Article
Magnetic swipe card technology is used for many purposes including credit, debit, store loyalty, mobile phone top-up and security identification cards. These types of cards and the details contained on them are often relied upon as a form of identification and personal authentication. As such reliance is placed upon them it is surprising that they do not incorporate more stringent security features, and because of this lack of features it is not surprising that they attract the attention of people who wish to exploit them for illegal gain. The paper introduces the type of technology, and range of devices available for manipulating magnetic swipe card data. It proposes the use of Digital Evidence Bags as a suitable format for the evidential storage of information obtained from them, thus further illustrating the flexibility of the format and demonstrating the diverse range of devices that have to be handled within the digital investigation and law enforcement community.
 
– These figures show two attempts to carve an image from Mars that was included in the DFRWS 2006 Challenge. The image on the left was formed by supplying a stream of sectors starting at sector 31,533 to a standard JPEG decompressor; the decompressor generated an error when it attempted to decompress sector 31,761. The image on the right was generated by concatenating sectors 31,533–31,752 and 31,888–32,773 into a single file. This example shows that the JPEG decompressor can be used as an object validator, but that it does not necessarily generate an error when it first encounters invalid information. It is thus necessary to augment decompression errors with additional error conditions – for example, the premature end of a file.  
Article
“File carving” reconstructs files based on their content, rather than using metadata that points to the content. Carving is widely used for forensics and data recovery, but no file carvers can automatically reassemble fragmented files. We survey files from more than 300 hard drives acquired on the secondary market and show that the ability to reassemble fragmented files is an important requirement for forensic work. Next we analyze the file carving problem, arguing that rapid, accurate carving is best performed by a multi-tier decision problem that seeks to quickly validate or discard candidate byte strings – “objects” – from the media to be carved. Validators for the JPEG, Microsoft OLE (MSOLE) and ZIP file formats are discussed. Finally, we show how high speed validators can be used to reassemble fragmented files.
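
A simplified sketch of an object validator in the spirit described above, using Pillow as the "standard JPEG decompressor" (an assumption, not the authors' implementation); as the caption above notes, a clean decode is necessary but not sufficient, so real carvers add further checks and a reassembly stage for fragmented files.

```python
import io
from PIL import Image  # Pillow is assumed to be installed

SECTOR = 512

def validate_jpeg(candidate: bytes) -> bool:
    """Object validator sketch: try to fully decode the candidate byte run
    as a JPEG; decode errors reject the candidate quickly."""
    if not candidate.startswith(b"\xff\xd8\xff"):  # SOI marker
        return False
    try:
        img = Image.open(io.BytesIO(candidate))
        img.load()  # force full decompression of the scan data
        return True
    except Exception:
        return False

def carve_contiguous(image: bytes, start_sector: int, max_sectors: int = 4096):
    """Grow a contiguous run of sectors from `start_sector` until the
    validator accepts it or the limit is reached; fragmented files need
    the reassembly step described in the paper."""
    start = start_sector * SECTOR
    for n in range(1, max_sectors + 1):
        candidate = image[start:start + n * SECTOR]
        if validate_jpeg(candidate):
            return candidate
    return None
```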
 
Article
This paper describes methods for forensic characterization of physical devices. This is important in verifying the trust and authenticity of data and the device that created it. Current forensic identification techniques for digital cameras, printers, and RF devices are presented. It is also shown how these techniques can fit into a general forensic characterization framework, which can be generalized for use with other devices.
 
Article
The current paper extends the earlier taxonomy framework of the author [Rogers M. Psychology of hackers: steps toward a new taxonomy. Retrieved October 15, 2001 from <http://www.infowar.com/new>; 1999], and provides a preliminary two-dimensional classification model. While there is no one generic profile of a hacker, studies indicate that there are at least eight primary classification variables: Novices (NV), Cyber-Punks (CP), Internals (IN), Petty Thieves (PT), Virus Writers (VW), Old Guard hackers (OG), Professional Criminals (PC), and Information Warriors (IW), and two principal components, skill and motivation. Due to the complex nature and interactions of these variables and principal components, the traditional one-dimensional continuum is inadequate for model testing. A modified circular order circumplex is presented as an alternative method for developing a preliminary hacker taxonomy. The paper discusses the importance of developing an understanding of the hacker community and the various sub-groups. Suggestions for future research are also discussed.
 
– Average of the average accuracy per class (%).
Article
We have applied the generalised and universal distance measure NCD (Normalised Compression Distance) to the problem of determining the type of file fragments. To enable later comparison of the results, the algorithm was applied to fragments of a publicly available corpus of files. The NCD algorithm, in conjunction with k-nearest-neighbour (k ranging from one to ten) as the classification algorithm, was applied to a random selection of circa 3000 512-byte file fragments from 28 different file types. This procedure was then repeated ten times. While the overall accuracy of the n-valued classification only improved the prior probability from approximately 3.5% to circa 32–36%, the classifier reached accuracies of circa 70% for the most successful file types. A prototype file fragment classifier was then developed and evaluated on a new set of data (from the same corpus). Circa 3000 fragments were selected at random and the experiment repeated five times. This prototype classifier remained successful at classifying individual file types, with accuracies ranging from only slightly lower than 70% for the best class down to accuracies similar to those in the prior experiment.
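
For reference, the NCD of two byte strings x and y is (C(xy) − min(C(x), C(y))) / max(C(x), C(y)) for a compressor C. The sketch below uses zlib as the compressor and a simple k-nearest-neighbour vote; it mirrors the described setup only loosely, and the helper names are illustrative.

```python
import zlib
from collections import Counter

def C(data: bytes) -> int:
    """Compressed length under zlib at maximum compression."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalised Compression Distance between two byte strings."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify_fragment(fragment: bytes, labelled_fragments, k: int = 10) -> str:
    """k-nearest-neighbour over NCD: the predicted file type is the majority
    label among the k reference fragments closest to the unknown fragment."""
    neighbours = sorted(labelled_fragments, key=lambda lf: ncd(fragment, lf[1]))[:k]
    return Counter(label for label, _ in neighbours).most_common(1)[0][0]

# labelled_fragments would be a list of (file_type, 512-byte fragment) pairs
# drawn from a reference corpus; `unknown` the fragment to classify:
# predicted = classify_fragment(unknown, labelled_fragments, k=10)
```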
 
Article
The process of a user simply deleting evidence from a computer hard disk will not ensure that it has been permanently removed. It is for this reason that many go to far greater lengths and use disk-scrubbing tools in an attempt to permanently remove information from storage media. This paper describes an experiment carried out to assess the effectiveness of two different disk-scrubbing tools in removing data from a computer hard drive. The results are discussed and conclusions drawn.
 
Article
Current digital forensic text string search tools use match and/or indexing algorithms to search digital evidence at the physical level to locate specific text strings. They are designed to achieve 100% query recall (i.e. find all instances of the text strings). Given the nature of the data set, this leads to an extremely high incidence of hits that are not relevant to investigative objectives. Although Internet search engines suffer similarly, they employ ranking algorithms to present the search results in a more effective and efficient manner from the user's perspective. Current digital forensic text string search tools fail to group and/or order search hits in a manner that appreciably improves the investigator's ability to get to the relevant hits first (or at least more quickly). This research proposes and empirically tests the feasibility and utility of post-retrieval clustering of digital forensic text string search results, specifically by using Kohonen Self-Organizing Maps, a self-organizing neural network approach. This paper is presented as a work in progress. A working tool has been developed and experimentation has begun. Findings regarding the feasibility and utility of the proposed approach will be presented at DFRWS 2007, as well as suggestions for follow-on research.
 
Article
Forensic memory analysis has recently gained great attention in the cyber forensics community. However, most proposals have focused on the extraction of important kernel data structures, such as executive objects, from memory. In this paper, we propose a formal approach to analyzing the stack memory of process threads to discover a partial execution history of the process. Our approach uses a process logic to model the properties extracted from the stack and then verifies these properties against models generated from the program assembly code. The main focus of the paper is on Windows thread stack analysis, though the same idea is applicable to other operating systems.
 
– Example artifact template for the Windows 2000 operating system.
Article
Over the past decade or so, well-understood procedures and methodologies have evolved within computer forensics for digital evidence collection. Correspondingly, many organizations such as the HTCIA (High Technology Crime Investigation Association) and IACIS (International Association of Computer Investigative Specialists) have emphasized disk imaging procedures which ensure reliability, completeness, accuracy, and verifiability of computer disk evidence. The rapidly increasing and changing volume of data within corporate network information systems and personal computers is driving the need to revisit current evidence collection methodologies. These methodologies must evolve to maintain the balance between electronic environmental pressures and legal standards. This paper posits that the current methodology, which focuses on collecting entire bit-stream images of original evidence disks, is increasing legal and financial risks. The first section frames the debate and the change drivers for a Risk Sensitive approach to digital evidence collection, followed by the current methods of evidence collection along with a cost-benefit analysis. The paper then presents the methodology components of the Risk Sensitive approach to collection, and concludes with a legal and resource risk assessment of this approach. Anticipated legal arguments are explored and countered as well. The authors suggest an evolved evidence collection methodology which is more responsive to voluminous data cases while balancing the legal requirements for reliability, completeness, accuracy, and verifiability of evidence.
 
Article
In August 2004, at the annual cryptography conference in Santa Barbara, California, a group of cryptographers (Xiaoyun Wang, Dengguo Feng, Xuejia Lai, and Hongbo Yu) announced that they had successfully generated two files with different contents that had the same MD5 hash. This paper reviews the announcement and discusses the impact this discovery may have on the use of MD5 hash functions for evidence authentication in the field of computer forensics.
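
A common practical mitigation, not taken from the paper, is to record more than one digest per exhibit, since files that collide under MD5 are still expected to differ under SHA-1 or SHA-256. A minimal sketch (the file name is hypothetical):

```python
import hashlib

def multi_hash(path: str, algorithms=("md5", "sha1", "sha256")) -> dict:
    """Hash a file with several algorithms in one pass over the data.
    Recording multiple digests preserves the authentication value of the
    acquisition record even if one algorithm is later shown to be weak."""
    hashers = {name: hashlib.new(name) for name in algorithms}
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            for h in hashers.values():
                h.update(chunk)
    return {name: h.hexdigest() for name, h in hashers.items()}

# print(multi_hash("evidence.dd"))  # hypothetical disk image file
```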
 
– DOSKEY F7 History Option.
– A Single Command Element.
Article
Current memory forensic tools concentrate mainly on system-related information like processes and sockets. There is a need for more memory forensic techniques to extract user-entered data retained in various Microsoft Windows applications such as the Windows command prompt. The command history is a prime source of evidence in many intrusions and other computer crimes, revealing important details about an offender’s activities on the subject system. This paper dissects the data structures of the command prompt history and gives forensic practitioners a tool for reconstructing the Windows command history from a Windows XP memory capture. At the same time, this paper demonstrates a methodology that can be generalized to extract user-entered data on other versions of Windows.
 
Article
Most voice over IP (VoIP) traffic is encrypted prior to its transmission over the Internet. This makes the identity tracing of perpetrators during forensic investigations a challenging task, since conventional speaker recognition techniques are limited to unencrypted speech communications. In this paper, we propose techniques for speaker identification and verification from encrypted VoIP conversations. Our experimental results show that the proposed techniques can correctly identify the actual speaker 70–75% of the time among a group of 10 potential suspects. We also achieve a more than 10-fold improvement over random guessing in identifying a perpetrator in a group of 20 potential suspects. An equal error rate of 17% is achieved for speaker verification on the CSLU speaker recognition corpus.
 
Article
Extracting communities using existing community detection algorithms yields dense sub-networks that are difficult to analyse. Extracting a smaller sample that embodies the relationships of a list of suspects is an important part of the beginning of an investigation. In this paper, we present the efficacy of our shortest paths network search algorithm (SPNSA) that begins with an "algorithm feed", a small subset of nodes of particular interest, and builds an investigative sub-network. The algorithm feed may consist of known criminals or suspects, or persons of influence. This sets our approach apart from existing community detection algorithms. We apply the SPNSA on the Enron Dataset of e-mail communications starting with those convicted of money laundering in relation to the collapse of Enron as the algorithm feed. The algorithm produces sparse and small sub-networks that could feasibly identify a list of persons and relationships to be further investigated. In contrast, we show that identifying sub-networks of interest using either community detection algorithms or a k-Neighbourhood approach produces sub-networks of much larger size and complexity. When the 18 top managers of Enron were used as the algorithm feed, the resulting sub-network identified 4 convicted criminals that were not managers and so not part of the algorithm feed. We also directly tested the SPNSA by removing one of the convicted criminals from the algorithm feed and re-running the algorithm; in 5 out of 9 cases the left out criminal occurred in the resulting sub-network.
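
A rough sketch in the spirit of the SPNSA, using networkx (an assumption; the published algorithm differs in detail): take one shortest path between every pair of algorithm-feed nodes and keep the sub-network induced by the nodes on those paths.

```python
import itertools
import networkx as nx

def shortest_path_subnetwork(G: nx.Graph, feed):
    """Build an investigative sub-network from a small 'algorithm feed':
    the union of one shortest path per pair of feed nodes."""
    keep = set(feed)
    for a, b in itertools.combinations(feed, 2):
        try:
            keep.update(nx.shortest_path(G, a, b))
        except nx.NetworkXNoPath:
            continue  # disconnected pair contributes nothing
    return G.subgraph(keep).copy()

# G would be the e-mail communication graph (nodes: addresses, edges:
# exchanged messages); `feed` a short list of suspects of interest, e.g.
# sub = shortest_path_subnetwork(G, ["suspect_a", "suspect_b", "suspect_c"])
```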
 
Article
The process of using automated software has served law enforcement and the courts very well, and experienced detectives and investigators have been able to use their well-developed policing skills, in conjunction with the automated software, to provide sound evidence. However, the growth in the computer forensic field has created a demand for new software (or increased functionality in existing software) and a means to verify that this software is truly "forensic", i.e. capable of meeting the requirements of the 'trier of fact'. In this work, we present a scientific and systematic description of the computer forensic discipline by mapping the fundamental functions required in the computer forensic investigation process. Based on this function mapping, we propose a more detailed functionality-oriented validation and verification framework for computer forensic tools. We focus this paper on the searching function. We specify the requirements and develop a corresponding reference set to test any tools that possess the searching function.
 
– Respondent demographics
– Zero-order correlations
– Analysis of variance: extraversion total
– Logistic regression: forward stepwise (Wald)
Article
The current research study replicated a study by Rogers et al. (Rogers M, Smoak ND, Liu J. Self-reported criminal computer behavior: a big-5, moral choice and manipulative exploitive behavior analysis. Deviant Behavior 2006;27:1–24) and examined the psychological characteristics, moral choice, and exploitive manipulative behaviors of self-reported computer criminals and non-computer criminals. Seventy-seven students enrolled in an information technology program participated in the web-based study. The results of the study indicated that the only significant variable for predicting criminal/deviant computer behavior was extraversion. Those individuals self-reporting criminal computer behavior were significantly more introverted than those reporting no criminal/deviant computer behavior. This finding is contrary to the findings of the previous study. The current study confirmed that the four psychometric instruments were reliable for conducting research in the field of criminal/deviant computer behavior. The impact of the findings on the field of digital forensic investigations is discussed as well as possible reasons for the apparent contradiction between the two studies.
 
Article
The architecture of existing (first generation) computer forensic tools, including the widely used EnCase and FTK products, is rapidly becoming outdated. Tools are not keeping pace with the increased complexity and data volumes of modern investigations. This paper discusses the limitations of first generation computer forensic tools. Several metrics for measuring the efficacy and performance of computer forensic tools are introduced. A set of requirements for second generation tools is proposed. A high-level design for a (work in progress) second generation computer forensic analysis system is presented.
 
Article
Computer forensics is mainly about investigating crime where computers have been involved. There are many tools available to aid the investigator with this task. We have created a prototype of a new type of tool, called CyberForensic TimeLab, where all evidence is indexed by its time variables and plotted on a timeline. We believe that this way of visualizing the evidence allows investigators to find coherent evidence faster and more intuitively. We have performed a user test in which a group of people evaluated our prototype against a modern commercial computer forensic tool, and the results of this preliminary test are very promising. They show that users completed the task in a shorter time, with greater accuracy and with fewer errors using CyberForensic TimeLab. The subjects also found the prototype more intuitive to use and felt that it allowed them to more easily locate evidence that was coherent in time.
 
Article
Current digital evidence acquisition tools are effective, but are tested rather than formally proven correct. We assert that the forensics community will benefit in evidentiary ways and the scientific community will benefit in practical ways by moving beyond simple testing of systems to a formal model. To this end, we present a hierarchical model of peripheral input to and output from von Neumann computers, patterned after the Open Systems Interconnection model of networking. The Hadley model categorizes all components of peripheral input and output in terms of data flow; with constructive aspects concentrated in the data flow between primary memory and the computer sides of peripherals' interfaces. The constructive domain of Hadley is eventually expandable to all areas of the I/O hierarchy, allowing for a full view of peripheral input and output and enhancing the forensics community's capabilities to analyze, obtain, and give evidentiary force to data.
 
Article
Evidence concerning times and dates in forensic computing is both important and complex. A case study is outlined in which forensic investigators were wrongly accused of tampering with computer evidence when a defence expert misinterpreted time stamps. Time structures and their use in Microsoft Internet Explorer are discussed together with local and UTC time translation issues. A checklist for examiners when producing time evidence is suggested underlining the need for the examiner to fully understand the meaning of the data that they are seeking to interpret before reaching critical conclusions.
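
As an example of the kind of translation involved, many Windows and Internet Explorer time stamps are 64-bit FILETIME values counting 100-nanosecond intervals since 1601-01-01 UTC. The sketch below decodes such a value (the sample constant is illustrative); presenting the result in local time still depends on time zone and daylight-saving assumptions, which is where misinterpretation can creep in.

```python
from datetime import datetime, timedelta, timezone

EPOCH_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_utc(filetime: int) -> datetime:
    """Convert a Windows FILETIME (100-ns intervals since 1601-01-01 UTC)
    to a timezone-aware UTC datetime."""
    return EPOCH_1601 + timedelta(microseconds=filetime // 10)

# Illustrative raw 64-bit value; the decoded instant is UTC, and any
# local-time rendering must state the time zone rules applied.
print(filetime_to_utc(0x01C7F0A5B2D3E4F0))
```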
 
Article
Database Forensics is an important topic that has received hardly any research attention. This paper starts from the premise that this lack of research is due to the inherent complexity of databases that are not fully understood in a forensic context yet. The paper considers the relevant differences between file systems and databases and then transfers concepts of File System Forensics to Database Forensics. It is found that databases are inherently multidimensional from a forensic perspective. A notation is introduced to express the meaning of various possible forensic queries within this multidimensional context. It is posited that this notation, with the multidimensional nature of databases as described, forms a map for possible Database Forensics research projects.
 
Article
Many organizations are failing to protect the digital evidence required to prosecute or even reprimand an individual. Digital investigators are responsible for ensuring that evidence is reliable enough to be admissible in court or to be useful in corporate disciplinary or termination proceedings. This article looks at some basic guidelines to make sure the evidence is protected and that notes made at the time are professional. To be useful, the evidence must also be presented in a comprehensible fashion. This article provides recommendations for report writing.
 
Top-cited authors
Simson L. Garfinkel
  • National Institute of Standards and Technology
Vassil Roussev
  • University of New Orleans
Mourad Debbabi
  • Concordia University Montreal
Ibrahim Baggili
  • University of New Haven
Kim-Kwang Raymond Choo
  • University of Texas at San Antonio