Figure 1. An 8-bit (1 byte) representation with the conventional integer-to-binary conversion. Choosing the right bit index for embedding is clearly crucial. This intricacy is less severe when using the RGC, since it produces a seemingly disordered decimal-to-binary mapping.
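Since the figure image itself is not reproduced here, the two mappings it contrasts can be sketched in a few lines of Python. This is an illustrative sketch of our own, not the authors' code, and the helper names are ours.

```python
# Illustrative sketch (not the authors' code): conventional binary versus the
# Binary Reflected Gray Code (BRGC) for 8-bit values, i.e. the two mappings
# the figure compares.

def to_binary(n: int, width: int = 8) -> str:
    """Conventional integer-to-binary representation."""
    return format(n, f"0{width}b")

def to_brgc(n: int, width: int = 8) -> str:
    """Binary Reflected Gray Code: n XOR (n >> 1)."""
    return format(n ^ (n >> 1), f"0{width}b")

for value in (126, 127, 128):
    print(value, to_binary(value), to_brgc(value))
# 126  01111110  01000001
# 127  01111111  01000000
# 128  10000000  11000000
# Between 127 and 128 every bit of the conventional representation flips, so
# which bit index is chosen for embedding matters greatly there; the BRGC word
# changes by only one bit, and its decimal-to-binary mapping looks "disordered",
# which is the property the caption refers to.
```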

Source publication
Conference Paper
The recent digital revolution has facilitated communication, data portability and on-the-fly manipulation. Unfortunately, this has brought along some critical security vulnerabilities that put digital documents at risk. The problem is in the security mechanism adopted to secure these documents by means of encrypted passwords; however, this security...

Context in source publication

Context 1
... algorithm using high- and low-pass filters. In our case we compute the four filters associated with the orthogonal or bi-orthogonal Haar wavelet. We choose the wavelet transform over the DCT (Discrete Cosine Transform) because [3]: the wavelet transform matches the Human Vision System (HVS) more closely than the DCT does, and visual artefacts introduced by wavelet-coded images are less evident than those of the DCT because the wavelet transform does not decompose the image into blocks for processing. The DFT (Discrete Fourier Transform) and DCT are full-frame transforms, so any change in the transform coefficients affects the entire image, unless the DCT is implemented using a block-based approach. The DWT, however, has spatial-frequency locality, which means that an embedded signal affects the image only locally. A wavelet transform therefore provides both a frequency and a spatial description of an image. More helpful to information hiding, the wavelet transform clearly separates high-frequency and low-frequency information on a pixel-by-pixel basis [4].

We refer to the data we wish to embed as the payload, herein the image itself. Since we need a means of protecting scanned documents against forgery, it is essential that the payload carries as much information from the host (cover image) as possible. There is a trade-off between perceptual quality and the space demanded for embedding (usually measured in bits). Without taking compression into account, the payload can be consistent with the cover signal; therefore, if the cover is stored as an 8-bit unsigned integer type, the payload will require 8 templates when applying the one-bit substitution method. There is a high-payload steganography approach called A Block Complexity Data Embedding (ABCDE) [5], but it is prone to statistical attacks as it acts in the spatial domain; moreover, it cannot resist any kind of manipulation of the stego-image (the image carrying the embedded data).

An approximation of the cover document can be obtained by applying a gray-threshold technique, which yields a binary image demanding only 1 bit per pixel for storage. Some authors suggest using an edge image instead, as it approximates the cover better. In the search for the best way to represent the cover image with the least bit requirement for embedding, we identified dithering as our ultimate pre-processing step, which is the foremost task in building our system. Dithering is a process by which a digital image with a finite number of gray levels is made to appear as a continuous-tone image [6]. Although each pixel takes on only one bit in every variant, the way dithering quantizes image pixels contributes greatly to the final quality of the approximation. We observed that thresholding performs better on text-based documents, whereas it proves a poor performer compared to dithering at capturing graphics. Since our aim is to produce a generally workable prototype, we have to take into account the presence of both text and graphics; we therefore opt for dithering (a minimal dithering sketch is given below).

Manipulating coefficients in the wavelet domain tends to be less sensitive than in other transforms such as the DCT and FFT. There are two methods to convert a decimal integer to a binary string: one is the conventional decimal-to-binary conversion and the other is termed the Binary Reflected Gray Code (BRGC). This binary mapping is the key to the augmented embedding capacity introduced by the ABCDE algorithm.
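The excerpt above motivates dithering over global thresholding as the 1-bit-per-pixel pre-processing step. A minimal sketch follows, assuming Floyd-Steinberg error diffusion purely for illustration; the paper does not name a specific dithering algorithm, and the function names are ours.

```python
# Minimal sketch of the 1-bit-per-pixel pre-processing step discussed above.
# Floyd-Steinberg error diffusion is an assumption made for illustration only.
import numpy as np

def global_threshold(gray: np.ndarray, level: int = 128) -> np.ndarray:
    """Global thresholding: adequate for text documents, poor for graphics."""
    return (gray >= level).astype(np.uint8)

def dither(gray: np.ndarray) -> np.ndarray:
    """Error-diffusion dithering: 1 bit per pixel, approximates graphics better."""
    img = gray.astype(np.float64)
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128.0 else 0.0
            out[y, x] = 1 if new > 0 else 0
            err = old - new
            # Diffuse the quantisation error to the unvisited neighbours.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```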
There is a trade-off between robustness and distortion, which is summarized in Figure 1. A further trade-off arises in our algorithm’s formulation from the different levels a wavelet decomposition can have: the lower we go, the more robust we get, but with less capacity for embedding. For example, if the cover image is of size 255x255 (8-bit grayscale), we obtain 16384 bits to embed at the first level; the second level reduces the image dimensions by a further factor of two to yield 4096 bits, and so on. In some cases the inverse wavelet transform yields values that fall above or below the limits allowed for 8-bit images and are therefore truncated; this truncation occurs because of the “non-natural bits” introduced by the secret message during embedding. To cope with this rare problem we choose to transform the RGB image into the YCbCr colour space before feeding it into the DWT, and we embed in the chrominance-red channel (Cr). This step ensures that no data will be lost. Our proposed design is illustrated in Figure ...
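A hedged sketch of the capacity figures quoted above: each DWT level halves both image dimensions, so a 255x255 cover offers roughly 128x128 = 16384 coefficients at level 1 and 64x64 = 4096 at level 2. The one-payload-bit-per-coefficient reading of a single sub-band is our assumption, made only to reproduce the quoted numbers; likewise, the BT.601 (JPEG) formula for the Cr plane is an assumed standard conversion, since the excerpt only states that embedding takes place in the Cr channel of YCbCr.

```python
# Capacity per DWT level and Cr-plane extraction; illustrative assumptions only.
import numpy as np

def capacity_bits(height: int, width: int, level: int) -> int:
    """Approximate payload capacity (bits) of one sub-band at a given DWT level."""
    for _ in range(level):
        height = (height + 1) // 2  # odd sizes are padded up, e.g. 255 -> 128
        width = (width + 1) // 2
    return height * width

def cr_channel(rgb: np.ndarray) -> np.ndarray:
    """Chrominance-red (Cr) plane of an 8-bit RGB image, BT.601 (JPEG) convention."""
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    return 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b

print(capacity_bits(255, 255, 1))  # 16384 bits at level 1 (128 x 128)
print(capacity_bits(255, 255, 2))  # 4096 bits at level 2 (64 x 64)
```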

Similar publications

Article
Steganography is the technology for communicating information secretly in an appropriate carrier, i.e. text, image, audio, and video files. Under this assumption, the objective is to conceal the existence of the embedded data. Steganography helps to maintain the confidentiality and security of transmitted information in an unprotected transmission...

Citations

... Regarding document forgery, Khanna et al. (2008), Elkasrawi and Shafait (2014), and Abed (2015) attempt to identify the scanned and the printed versions of a PDF document based on texture analysis. With the use of cryptography, a counterfeit is identified by mechanisms that prevent signatures linked to the document from being evaded (Perry et al., 2000; Picard et al., 2004; Cheddad et al., 2008; Schouten and Jacobs, 2009; Ibrahim et al., 2010; Yang, 2014; Tchakounté et al., 2020b; Tchakounte et al., 2021). Research attempts have been proposed for the recognition of falsification with artificial intelligence based on image feature extraction (Van Beusekom et al., 2013; Patgar and Vasudev, 2013; Bertrand et al., 2013; 2015; Patgar et al., 2014; Abramova, 2016; Laptev et al., 2017). ...
... Most documents are now in digital format, which moves us toward a paperless workspace. This situation comes with the problem of security breaches when transmitting documents digitally over a network [2]. ...
... The intrinsic random imperfections generated make the sheet almost unique, and under certain conditions it is possible to extract a proper fingerprint. The massive demand for robust identification methods in many contexts [1][2][3][4][5][6] makes fingerprint extraction from a sheet of paper an attractive and challenging research topic. Investigative scenarios in the forensic field [7,8] could gain several advantages from the availability of such a fingerprint. ...
Article
The identification of printed materials is a critical and challenging issue for security purposes, especially when it comes to documents such as banknotes, tickets, or rare collectable cards: eligible targets for ad hoc forgery. State-of-the-art methods require expensive and specific industrial equipment, while a low-cost, fast, and reliable solution for document identification is increasingly needed in many contexts. This paper presents a method to generate a robust fingerprint by extracting translucent patterns from paper sheets and exploiting the peculiarities of binary pattern descriptors. A final descriptor is generated by employing a block-based solution followed by principal component analysis (PCA), to reduce the overall data to be processed. To validate the robustness of the proposed method, a novel dataset was created and recognition tests were performed under both ideal and noisy conditions.
... In [22], a framework to identify fraud on checked archives based on covering up procedures is proposed. Steganography is one of the data covering up procedures; it is the science of imperceptible implanting of data in a computerized medium. ...
... It may fail to detect image forgery. [22] Steganography: an efficiency of 72.0% was achieved by this system in detecting forged identity documents. ...
Article
Detecting fraudulent documents has lately become crucial, as techniques for producing fake documents are becoming widely accessible and simple to use, even for an untrained person. A readily and easily available mass of literature containing essential information on identification cards, birth certificates, or passports has continuously exacerbated the problem. This readily accessible literature describes numerous strategies and procedures that directly support the creation of fake IDs and documents. This survey paper investigates various strategies for countering document fraud threats along with their achievements and limitations. At the end of the paper, a table compares these strategies. Novel and more effective procedures and methods are required to resolve and control the threats and dangers of ID and document forgery.
... Such forgeries can put one's identity, wealth, and life at risk through impersonation [36]. Several researchers have proposed methods to combat digital document forgery that are useful not only for physical document security but also for ensuring that the personal data of the document holder remains private and does not fall into the wrong hands [4,13,16,30]. However, every method proposed has its own advantages as well as disadvantages. ...
Article
Concerns over the privacy and security of identity documents like Passports, PAN cards, and Driving Licenses, as well as other important personal documents like academic degree certificates, are now at an all-time high, given how easy and cheap new technologies make it to produce forged documents, which not only threaten an individual's standing in society or career aspirations but can also prove life-threatening if placed in the wrong hands. Thus, it is very important to have mechanisms that prevent, or rather make computationally unfeasible, the forgery of these important documents. In this paper, we present an extensive review of techniques aimed at making tamper-resistant physical documents, published across the last two decades. We provide an in-depth classification of the means used for securing documents in the existing literature, discuss their limitations and open areas, and offer future insights to address the open issues and challenges.
... This process introduces random imperfections, which make the paper sheet unique and allow a fingerprint to be built. The massive demand for robust authentication methods in many contexts [1,2,3,4,5] makes fingerprint extraction an attractive and challenging problem. Although several techniques have been proposed, most of them require expensive industrial devices. ...
... $LBPH_{r,n}(I) = LBP_{r,n}(\{patch_k(I, L) \mid k \in [1..p]\})$ (1). In order to reduce histogram length and to introduce rotational invariance, uniform binary patterns have been used. Given two paper sheet images $I_1$, $I_2$ whose contents are described by LBPs computed as in (1), the authenticity ... [footnote 1: iplab.dmi.unict.it/paperFingerprint]
... Abbas Cheddad et al. [22] presented a method for detecting forgery in scanned documents using an information-hiding technique, steganography, which is the science of embedding data in a digital medium in an invisible way. They also proposed an efficient, highly secure method that is robust against different image-processing attacks. ...
Conference Paper
Abstract— Identity Document (ID) forgery is a process by which an authorized identity document is modified and/or copied by an unauthorized party or parties to be used illegally. The rapid development of personal computers, scanners, and color printers, along with their affordability, raises the risk of identity document forgery. They represent rich tools and techniques that strongly support the creation of fake identities. This review paper investigates the current techniques for countering document forgery threats along with their achievements and limitations. These techniques are insufficient to mitigate ID forgery threats. New methods and techniques need to be developed to reduce or eliminate the risk of identity document forgery.
... R, G, and B have been enhanced equally. These kinds of situations are quite scarce when considering image-splicing forgery (cut and paste), according to [5]. Although their algorithm is tested and validated under both forensic and counter-forensic conditions, it does not work well with images that use JPEG compression. ...
Chapter
With the rise of digitization, documents are being created, modified, and shared digitally. Unlike hard copies, it is difficult to ascertain the authenticity of these digital documents. Thus, there is a need to authenticate and verify digital documents in an efficient manner. In this chapter, a decentralized application has been proposed, which uses a smart contract to facilitate the authentication and verification of documents by leveraging the blockchain technology. In contrast to the traditional way of storing the entire input digital document, this approach creates a unique fingerprint of every input document by using a cryptographic hash function. This fingerprint is stored on the blockchain network to verify the document in future. This blockchain based solution can be used by organizations to authenticate the documents that they generate and allow other entities to verify them.