TRLG: Fragile blind quad watermarking for image tamper detection and recovery, providing compact digests with quality optimized using LWT and GA
In this paper, an efficient fragile blind quad watermarking scheme for image tamper detection and recovery based on the lifting wavelet transform and genetic algorithm is proposed. TRLG generates four compact digests with very high quality based on the lifting wavelet transform and a halftoning technique by distinguishing the types of image blocks. In other words, for each 2×2 non-overlapping block, four chances for recovering destroyed blocks are provided. A parameter estimation technique based on the genetic algorithm is applied to improve and optimize the quality of the digests and the watermarked image. Furthermore, the CCS map is used to determine the mapping blocks for embedding information and to encrypt and confuse the embedded information. In order to improve the recovery rate, Mirror-aside and Partner-block strategies are proposed. The experiments conducted to evaluate the performance of TRLG prove its superiority in terms of the quality of the watermarked and recovered images, tamper localization, and security compared with state-of-the-art methods. The results indicate that the PSNR and SSIM of the watermarked image are about 46 dB and approximately one, respectively. Also, the mean PSNR and SSIM of recovered images that have been tampered by about 90% reach 24 dB and 0.86, respectively.
Keywords: Data hiding, Watermarking, Tamper detection and recovery, Texture analysis, Genetic algorithm, Lifting wavelet transform.
In today's digital world, communication networks such as the Internet have rapidly developed and extended as a suitable channel for transferring all kinds of data, particularly multimedia data. Since the amount of multimedia data transmitted over the Internet is increasing extensively, the problem of copyright and integrity protection has become a very serious issue and has drawn much attention ref11 (); ref8 (). In other words, digital data can be easily copied or maliciously tampered with using various tools, without any loss of quality perceivable by the human visual system. Therefore, secure strategies should be designed to address these challenges. Among the solutions for these issues, digital watermarking techniques ref11 (); ref8 (); ref10 () remain the most popular.
Digital watermarking is the science and art of imperceptibly hiding useful information in digital media for various goals such as copyright protection ref26 (); ref27 (); ref29 (); ref30 (), broadcast monitoring, and authentication ref11 (); ref8 (); ref10 (). In recent years, watermarking has attracted much attention as an effective solution to guarantee the integrity and authenticity of digital images against illegal modification ref9 (); ref1 (). To do so, the watermark information, including an authentication pattern and a digest, is embedded into the host image without severely affecting its perceptual quality, so that tampered regions can be detected and recovered on the receiver side. It should be noted that the host image refers to the original image without the embedded watermark, while the image obtained by embedding the watermark into the host without seriously degrading its quality is called the watermarked image. Generally, these methods can be classified into two categories: fragile and semi-fragile techniques ref8 (); ref1 (). A fragile scheme makes the hidden information invalid after any modification of the content of the watermarked image. In other words, the watermark is designed to become undetectable upon even the slightest modification to the host signal. Therefore, fragile schemes are mainly used for authentication purposes ref7 (); ref12 (); ref13 (); ref14 (); ref15 (); ref16 (); ref17 (); ref18 (); ref19 (); ref20 (); ref21 (); ref22 (); ref23 (). On the other hand, a semi-fragile technique aims at making the hidden information fragile to content modifications of the signal but robust to acceptable operations such as compression and common image processing.
Fragile watermarking has its own specific requirements, including imperceptibility, capacity, and security ref8 (); ref9 (). Imperceptibility denotes the idea that an embedded watermark must be invisible to the human visual system; in other words, the embedded information should preserve the image's visual quality. Capacity denotes the amount of information that can be embedded in the host. Finally, the security of a watermarking system refers to the safety of the watermark embedded in the host, even when an attacker has full knowledge of the embedding and detection procedures. In this field, security has become one of the most important and challenging problems for watermarking schemes.
1.1 Literature review
In this subsection, a brief review of several fragile schemes proposed in the last decade is presented. Also, the advantages and weaknesses of each method are described and compared. Nowadays, fragile watermarking authentication schemes have been extended and developed extensively. These methods can be divided into two types. Some methods focus only on locating the suspicious regions in the host image ref23 (); ref15 (); ref18 (); ref32 (). In contrast, other schemes can also recover tampered parts using the information embedded in non-tampered regions ref7 (); ref12 (); ref13 (); ref14 (); ref16 (); ref17 (); ref19 (); ref20 (); ref21 (); ref22 ().
In ref7 (), an effective dual watermark method for image tamper detection and recovery was proposed. In this scheme, for the first time, two chances are provided across the entire image to recover tampered regions. Consequently, the recovery rate and the quality of the recovered image are optimized more efficiently than in previous methods. In addition, hierarchical authentication is employed to detect the tampered regions. In ref15 (), a probability-based tampering detection scheme for digital images was presented to reduce errors in the authentication phase; in other words, probability theory is used to enhance authentication accuracy. The experimental results show that the scheme provides good accuracy in terms of detection precision. In ref32 (), an image authentication scheme based on absolute moment block truncation coding (AMBTC) was proposed. In this scheme, a hybrid mechanism is employed to hide the authentication watermark using AMBTC and to improve the embedding efficiency. In the embedding phase, the watermark is embedded into the bitmap or the quantization levels depending on the texture of the blocks. The experimental results illustrate that the scheme can effectively thwart the collage attack. Another self-embedding fragile watermarking scheme was presented in ref13 () as a novel image tamper localization and recovery algorithm based on watermarking technology. The security of this method is increased by using a non-linear chaotic sequence. In order to generate the digest, the DCT is applied to the coefficients of each 2×2 block, which are then embedded into another block according to the block mapping. A novel chaos-based fragile watermarking scheme for image tampering detection and self-recovery was presented in ref12 (). In this scheme, a new chaotic sequence generator, the cross chaotic map, is employed to determine the block mapping. Hence, security is increased thanks to the many parameters of this map, which can be used as keys.
Similarly, two chances are considered to recover 2×2 modified blocks. An effective Singular Value Decomposition based image tampering detection and self-recovery scheme using active watermarking was proposed in ref14 (). In this method, 12 bits of tamper detection data are generated and embedded in a random block after being encrypted. One of the positive aspects of this scheme compared to previous ones is its ability to detect tampered regions under various security attacks, including vector-quantization and collage attacks.
In ref16 (), the authors presented an efficient fragile watermarking scheme for image authentication and restoration based on the Discrete Cosine Transform. In this scheme, the host is divided into 2×2 non-overlapping blocks. As in most schemes, a 12-bit watermark is generated for each block from the five Most Significant Bits of each pixel and embedded into the three Least Significant Bits of the pixels of the corresponding mapped block. In addition, the scheme uses two levels of encoding to generate the content correction bits. In ref19 (), an image tamper detection and recovery scheme using adaptive embedding rules was presented. One of the major novelties of this method is the use of smoothness to distinguish the characteristics of image blocks. Accordingly, different watermark embedding, tamper detection, and recovery strategies were designed and applied to different block types. Hence, authentication and recovery information can be effectively embedded in a limited space to increase information hiding efficiency. Experimental results showed that this scheme causes less damage to the original image than most fragile schemes. In ref20 (), a DCT-based effective self-embedding watermarking scheme for image tamper detection and localization with recovery capability was presented. In this scheme, as in most schemes, two authentication bits and ten recovery bits are generated for each 2×2 non-overlapping block from the five Most Significant Bits of the pixels. The experimental results illustrate that the scheme not only achieves high-quality restoration but also removes blocking artifacts. The authors of ref18 () proposed an image tamper detection scheme based on fragile watermarking and the Faber-Schauder wavelet. The maximum coefficients of the FSDWT are utilized together with a logo to generate the watermark, which is embedded in the Least Significant Bit of specified pixels in the host.
In ref23 (), a novel efficient reversible image authentication method using improved PVO and LSB substitution techniques was presented. In this scheme, instead of embedding the block-independent authentication code as in previous work, the hashed value of block features is embedded. In addition, a mechanism to deal with the overflow and underflow problems is considered. The schemes in ref18 (); ref23 () achieve high image quality and low computational complexity, but their main drawback is the inability to recover tampered regions.
Another scheme, improved image tamper localization using chaotic maps and self-recovery, was proposed in ref17 (). In this scheme, the authentication bits of a 2×2 image block are generated using chaotic maps. Thereafter, for each non-overlapping block, two different sets of recovery bits of lengths 5 and 3 are computed, and each set is embedded into a randomly selected distinct block. In ref21 (), a new fragile image watermarking scheme with pixel-wise recovery based on an overlapping embedding strategy was presented. In this work, a block-wise mechanism is used for tampering localization and a pixel-wise mechanism for content recovery. Compared to other methods, this scheme achieves superior tampering recovery even for larger tampering rates. To achieve better recovery performance, the authors of ref22 () proposed hierarchical recovery for tampered images based on watermark self-embedding. In this scheme, the higher MSB layers of tampered parts have a higher priority to be corrected than the lower MSB layers. Hence, the quality of the recovered image is improved, especially for larger tampering rates. Experimental results demonstrate the effectiveness and superiority of this scheme compared to previous methods.
In ref1 (), a fragile and blind dual watermarking scheme for image tamper detection and self-recovery based on the Lifting Wavelet Transform and a halftoning technique was proposed. In order to improve the quality of the recovered image, two chances are provided by embedding a novel LWT-based digest and a halftone version. In addition, to enhance the quality of the LWT-based digest, a new LSB technique was proposed. Experimental results prove the effectiveness, imperceptibility, and real-time capability of TRLH compared to the other schemes reviewed so far, especially in terms of the quality of the watermarked and recovered images and of security. In addition, TRLH not only achieves high-quality restoration effectively but also removes blocking artifacts and increases the accuracy of tamper localization thanks to the use of very small blocks.
Overall, the fragile methods proposed in recent years suffer from low visual quality of the watermarked and recovered images, a low recovery rate under large tampering, weak localization, and poor security. Most schemes pose a severe security threat because of the independence between the content and the watermark. In addition, most recently proposed schemes are vulnerable to vector-quantization, collage, and protocol attacks.
1.2 Key contributions of TRLG
In this paper, in order to achieve better visual quality for both the watermarked and recovered images, improve security, and overcome the aforementioned challenges, an efficient fragile blind quad watermarking scheme for image tamper detection and recovery based on the Lifting Wavelet Transform (LWT) and Genetic Algorithm (GA) is proposed. TRLG provides interesting extensions addressing the most important limitations of previous state-of-the-art schemes.
In TRLG, the digests are classified into two categories: primary and secondary. The two primary digests are generated based on the LWT, and the two secondary digests are obtained using the Floyd kernel of the halftoning technique. The LWT (Haar, integer) ref31 (); ref33 () is used because this transform produces integer coefficients and requires less computational time and memory than the traditional wavelet transform. In order to improve and optimize the quality of the primary digests, GA ref4 (); ref3 () is employed. Using GA avoids exhaustive searching and allows us to intelligently classify the blocks of the image, in terms of texture, as flat or rough. Experimental results will show that the generated digests have better quality and reduce blocking artifacts in the recovered image compared to traditional digests obtained by averaging pixels, DCT-based methods, or MSB planes.
Furthermore, in TRLG, to increase the recovery rate and further guarantee the quality of the recovered image, a novel mapping strategy for shuffling the four digests is considered. Based on this technique, the coefficients of each digest are embedded in the host image so that the maximum distance between the coefficients of the other digests and the initial position of the original values is achieved. In TRLG, to enhance security and raise detection accuracy, a new chaotic map, CCS, is used. Its irregular outputs are used to shuffle the digests and improve the security of the watermark. During the watermark bit embedding process, first, the authentication bits and the digests are combined to form the watermark data, which is placed in the LSBs using LSB matching. Next, in order to resist special attacks such as vector-quantization, collage, and protocol attacks, the watermark is encrypted and permuted per block. Moreover, a small non-overlapping block of size 2×2 is used to improve the accuracy of localization.
Moreover, in TRLG, to improve the quality of the watermarked image and the security of the watermarks, the watermark embedded in each block of the image is encrypted with a key that is intelligently selected by GA. In other words, GA is applied to intelligently optimize and modify the watermark values of each block so as to decrease the difference between the watermarked and original values, while also achieving a high level of security. Generally, applying optimization algorithms to watermarking techniques is practical and effective. Experimental results of other state-of-the-art methods are compared with TRLG, revealing that the proposed scheme exhibits excellent quality for the watermarked and recovered images, as well as improved security.
Generally, TRLG makes three main contributions. First, to the best of the authors' knowledge, this is the first work to generate compact digests with quality optimization, and the first to provide more than two chances to recover tampered regions. Second, in TRLG, which is a fragile scheme, digest generation and watermark embedding are modeled as a search and optimization problem. Third, chaotic maps are combined and various keys are utilized to enhance the security of the watermarking system.
1.3 Road map
The remainder of this paper is organized as follows: Section 2 briefly explains some background material for TRLG. In Section 3, the design and implementation of TRLG are described in detail. Next, the experimental evaluation scenario and details of the comparison with state-of-the-art fragile methods are presented in Section 4. Finally, the conclusion and future scope of TRLG are given in Section 5.
In this section, some background material for the subsequent sections is presented. First, the Chebyshev-Chebyshev (CCS) chaotic map is introduced. Next, a brief review of the Genetic Algorithm (GA) is given. Finally, a new inverse halftoning method is presented.
2.1 Chebyshev-Chebyshev map (CCS)
Chaotic maps are simple and efficient techniques utilized in watermarking schemes for shuffling and encrypting the watermark.
The Logistic map is one of the most popular and simplest 1D chaotic maps and is widely used in this field. The random sequence of this map is generated by Eq. (1):
x_{n+1} = μ x_n (1 − x_n), (1)
where μ and x_0 are the control parameter and the initial value of the map, respectively. This map has two main drawbacks: first, its chaotic range is limited to [3.57, 4]; second, it cannot generate chaotic behavior outside this range ref2 ().
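As a concrete illustration, the logistic recurrence can be iterated in a few lines (a minimal sketch; the parameter values below are arbitrary examples, not keys from the paper):

```python
def logistic_sequence(x0, mu, n):
    """Iterate the logistic map x_{k+1} = mu * x_k * (1 - x_k) for n steps."""
    values = []
    x = x0
    for _ in range(n):
        x = mu * x * (1 - x)
        values.append(x)
    return values

# With mu in [3.57, 4] the orbit behaves chaotically and stays in [0, 1].
seq = logistic_sequence(0.37, 3.99, 10)
```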
where the two control parameters and the initial value of the sequence act as secret keys. The chaotic performance of CCS is much better than that of a single map.
2.2 Genetic algorithm
The Genetic Algorithm (GA) is one of the most famous optimization tools in artificial intelligence, introduced by Holland ref4 (); ref3 (). It is a heuristic search algorithm based on the mechanisms of natural selection and genetics that finds the best global minimum or maximum solutions in a large search space. An optimization problem based on GA is modeled by defining the chromosome, the fitness function, and three main operators: selection, crossover, and mutation. The overall steps of GA are shown in Fig. 1.
The process starts with an initial population of chromosomes that represent the variables of the problem as encoded binary strings. The initial population is selected randomly from the set of possible solutions. The binary strings are adjusted to maximize or minimize the fitness values. To do so, a fitness function is utilized to measure the quality of each chromosome in the population. It should be noted that the fitness function must be carefully chosen based on the requirements of the optimization problem. Next, GA tries to produce further candidate solutions to achieve the desired optimum. In other words, the next generation is generated from the particular group of chromosomes whose fitness values are high enough to survive. Hence, three genetic operators are triggered to recombine the genes and create new chromosomes over successive generations. These basic operators can be summarized as follows:
Selection: In this step, a portion of the fitter chromosomes is selected to generate the new population, similar to the natural world. A chromosome holding a higher fitness value subsequently has a higher chance of survival. In other words, a part of the low-fitness chromosomes is discarded through this natural selection step.
Crossover: In this step, pairs of optimal chromosomes among the surviving chromosomes are chosen as parents to produce two new children. Evidently, the chromosomes with higher fitness values generate more children. To do so, a crossover point is selected between the first and last genes of the parent chromosomes. Next, two new children are generated by swapping the fractions of each chromosome after the crossover point.
Mutation: Finally, to prevent GA from getting trapped in a local optimum and from converging prematurely, the mutation operator is employed. To do so, some random positions of the chromosomes are flipped by changing 0 to 1 and vice versa.
In the end, the GA cycle is repeated until the desired termination criterion is satisfied or the maximum number of iterations is reached.
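The loop described above can be sketched for a toy problem, here maximizing the number of 1-bits in a chromosome; all function names and parameter values are illustrative choices, not the paper's:

```python
import random

def run_ga(fitness, n_bits=8, pop_size=20, generations=40, p_mut=0.05, seed=1):
    """Minimal GA: tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Selection: tournament of size 2, the fitter chromosome survives.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            # Crossover: single point between the first and last genes.
            cut = rng.randint(1, n_bits - 1)
            c1 = p1[:cut] + p2[cut:]
            c2 = p2[:cut] + p1[cut:]
            # Mutation: flip each gene with a small probability.
            for c in (c1, c2):
                for i in range(n_bits):
                    if rng.random() < p_mut:
                        c[i] ^= 1
            children += [c1, c2]
        pop = children[:pop_size]
    return max(pop, key=fitness)

best = run_ga(fitness=sum)  # maximize the number of ones
```

In TRLG the chromosomes encode classification thresholds and the fitness is an image-similarity measure, but the operator structure is the same.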
2.3 Halftone technique
Digital halftoning is a technique for generating a halftone version of an image by homogeneously distributing black and white pixels derived from the continuous-tone image ref28 (); ref25 (); ref24 (). In order to generate the halftone version of the image, the Floyd kernel (filter) is chosen ref33 (). This kernel is illustrated in Eq. (3):
(1/16) [ − ∗ 7 ; 3 5 1 ], (3)
where ∗ represents the current pixel.
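For reference, error-diffusion halftoning with this kernel can be sketched as follows (a simplified implementation; boundary handling and scan order may differ from the paper's):

```python
def floyd_steinberg_halftone(img):
    """Error-diffusion halftoning with the Floyd-Steinberg kernel:
    7/16 to the right, 3/16 down-left, 5/16 down, 1/16 down-right."""
    h, w = len(img), len(img[0])
    work = [[float(v) for v in row] for row in img]  # accumulates diffused error
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = work[y][x]
            new = 255 if old >= 128 else 0  # binarize the current pixel
            out[y][x] = new
            err = old - new
            if x + 1 < w:
                work[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x - 1 >= 0:
                    work[y + 1][x - 1] += err * 3 / 16
                work[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    work[y + 1][x + 1] += err * 1 / 16
    return out

halftone = floyd_steinberg_halftone([[100] * 8 for _ in range(8)])
```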
One of the major applications of the halftone technique is inverse halftoning. In this process, a halftone version of the image is used to reconstruct the continuous-tone version. Nowadays, several methods have been proposed for this purpose, but most of them produce low-quality inverse versions compared to the original. In TRLG, a novel and effective inverse halftoning technique based on a Deep Convolutional Neural Network, proposed in ref5 (), is utilized. In order to map the halftone version of the image to a continuous tone, a deep CNN is used as a nonlinear transform. For this aim, a pre-trained deep CNN is employed as a feature extractor to construct the objective function for training the transformation CNN. The experimental results illustrate that it can create inverse halftoned images with higher quality than WInHD ref6 (), which was used in ref1 (). For more information about the process of generating the halftone version and the inverse method, refer to ref5 ().
3 Proposed method
In this section, a fragile blind quad watermarking scheme for image tamper detection and recovery, providing compact digests with optimized quality based on the Lifting Wavelet Transform (LWT) and Genetic Algorithm (GA), is proposed. TRLG includes two main phases, described in detail below:
Generating and embedding the watermark: In this phase, four digests are first generated based on the LWT and the Floyd kernel. Next, to improve the recovery rate and increase security, each digest is shuffled and arranged separately using a new chaotic map. Then, an authentication bit for each 2×2 block is calculated based on the relation between the pixels of the block and the digest that must be embedded in it. Finally, to form and embed the watermark, the digests and authentication bits are combined, and the watermark is encrypted and embedded using the chaotic map, GA, and a modified LSB-matching technique. The block diagram of this phase is shown in Fig. 4.
Tamper detection and recovery: In this phase, to analyze the integrity of the watermarked image received from the communication channel, the watermark is first extracted and decrypted. Next, the tampered regions are marked based on the extracted and calculated authentication bits. Finally, the four digests are reshuffled and reconstructed to recover the tampered regions using their valid parts. Fig. 5 illustrates the block diagram of the tamper detection and recovery phase.
3.1 Generating and embedding watermark
Let the cover image, whose dimensions are divisible by 4, be denoted as the host. TRLG is able to detect and recover 2×2 modified blocks. The color components of the host are represented in the RGB and YUV color spaces. If the host is in grayscale mode, the chrominance components are meaningless and no further processing is needed for them. The procedure of generating and embedding the watermark is described in detail below:
3.1.1 Generating digests
As mentioned before, four digests are considered in TRLG to recover tampered regions. These digests are classified as primary and secondary. The two primary digests are generated based on the LWT and GA, and the two secondary digests are obtained using the halftoning technique.
The steps of generating primary digest are as follows:
The luminance component is resized to 50% of its original size, and one level of the LWT is applied to the result to generate the wavelet sub-bands.
The coefficients of each band are quantized by Eq. (4):
where the two parameters are the quantization step and the coefficients of the wavelet bands, respectively.
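To illustrate, one level of the integer Haar lifting scheme on a 1D signal can be sketched as below; the quantization of Eq. (4) is assumed here to be a simple integer division by the quantization step (an assumption, since the exact formula is not reproduced in this text):

```python
def haar_lwt_1d(signal):
    """One level of integer Haar lifting: predict (detail) then update (approx)."""
    even, odd = signal[0::2], signal[1::2]
    detail = [o - e for o, e in zip(odd, even)]            # predict step
    approx = [e + (d >> 1) for e, d in zip(even, detail)]  # update step
    return approx, detail

def inverse_haar_lwt_1d(approx, detail):
    """Exact inverse of the lifting steps (integer-to-integer, lossless)."""
    even = [a - (d >> 1) for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    return [v for pair in zip(even, odd) for v in pair]

def quantize(coeff, delta):
    """Assumed uniform quantization of a wavelet coefficient by step delta."""
    return coeff // delta
```

The lossless integer-to-integer property is what makes the lifting form attractive for digest generation: no rounding error accumulates before quantization.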
In this step, a texture analysis of each block is performed to intelligently generate the digest for images of various types. Hence, each 4×4 block is classified into one of two classes, textured or flat, based on the Standard Deviation (STD) measure and GA. For this aim, the STD is first computed per block. Next, the optimal thresholds for separating the blocks are obtained based on GA. The details of GA training are explained further in the Thresholds Optimization sub-section. At the end of GA training, a threshold matrix is obtained. Then, the type of each block is marked as textured or flat by Eq. (5):
where the resulting flag, taking a value in {0, 1}, indicates the class of each block.
In this step, the coefficients of each band are modified according to Eq. (6):
where the parameter comes from the technique proposed in TRLH ref1 (). Based on this technique, the difference between two corresponding coefficients is reduced, which increases the quality of the image digest. If the two or three LSBs of the coefficient to be discarded are zero, the parameter is set to 2 or 4, respectively. In total, 20 bits (19 bits describing the coefficients of each band, and 1 bit describing the type of the corresponding block) are obtained to represent each block of the digest.
If the host is a color image, the chrominance components are resized to 25% of their original size. Then, the values of the components are modified and updated by Eq. (7):
where the operator denotes a bitwise right shift.
In total, 14 bits are obtained to describe each 4×4 block of the chrominance components.
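As a sketch of the idea, dropping one LSB of each 8-bit chrominance sample with a right shift leaves 7 bits per component, which matches a 14-bit budget for the two components (the shift amount of 1 is an assumption for illustration):

```python
def compress_chroma(sample, shift=1):
    """Keep the 7 MSBs of an 8-bit chrominance sample via bitwise right shift."""
    return sample >> shift

# Two chrominance components at 7 bits each = 14 bits per 4x4 block.
bits_per_block = 2 * (8 - 1)
```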
Finally, 34 bits (20 bits for grayscale images) are considered to represent each 4×4 block of gray or color images. It should be noted that although the block size is 4×4, in the inverse procedure TRLG can recover each block with 2×2 precision. In other words, the digest proposed in TRLG has far better quality than traditional methods based on 2×2 blocks or larger. This claim will be proved in Sec. 4.
Finally, the result, which includes all bands, is defined as the primary digest. The novel primary digest proposed in TRLG is named DLG.
Thereafter, to generate the secondary digests, the Floyd kernel is used. To do so, the components are first resized to 50% of their original size, and the Floyd kernel is applied to each band separately. In total, for each 2×2 block, 3 halftone bits (1 bit for grayscale images) are considered. The result defines the secondary digest.
Thresholds Optimization: As can be seen, the thresholding step plays an important role in the DLG algorithm. In other words, the key challenge is how to classify blocks in terms of texture so as to achieve an efficient digest of good quality. Therefore, to guarantee the quality of the generated digest and select the optimal thresholds, GA, a well-known modern optimization algorithm, is employed. To do so, the image is divided into non-overlapping blocks of size 128×128. The overall GA-based digest generation is summarized in three steps:
First, the initial threshold population is randomly created and converted into chromosomes. Next, the digest of the current block is generated using the solutions in the population.
The fitness function is evaluated between the current block and the reconstructed primary digest belonging to it, for each corresponding solution, by Eq. (8):
where SSIM is the Structural Similarity Index.
Finally, the GA operators, including selection, crossover, and mutation, are applied to each chromosome to generate the next generation.
These steps are continued for all blocks until a predefined condition is satisfied or a fixed number of generations is exceeded. Finally, the optimal thresholds are obtained. At the end, the threshold matrix is resized to the original size of the image.
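The block classification that the GA-selected thresholds drive can be sketched as follows (a pure-Python illustration with the threshold supplied directly rather than evolved; the SSIM-based fitness of Eq. (8) is omitted for brevity):

```python
import statistics

def block_std_map(img, bs=4):
    """Per-block standard deviation, used to separate flat and textured blocks."""
    h, w = len(img), len(img[0])
    stds = []
    for by in range(0, h, bs):
        row = []
        for bx in range(0, w, bs):
            vals = [img[y][x] for y in range(by, by + bs) for x in range(bx, bx + bs)]
            row.append(statistics.pstdev(vals))
        stds.append(row)
    return stds

def classify_blocks(stds, threshold):
    """Eq. (5)-style labeling: 1 = textured, 0 = flat (threshold chosen by GA)."""
    return [[1 if s > threshold else 0 for s in row] for row in stds]

# Toy 8x8 image: flat left half (constant 50), textured right half (checkerboard).
img = [[50] * 4 + ([0, 255] * 2 if y % 2 == 0 else [255, 0] * 2) for y in range(8)]
labels = classify_blocks(block_std_map(img), threshold=5.0)
```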
3.1.2 Scrambling digests
As mentioned above, in TRLG, four digests are considered for tampering recovery. Therefore, four schemes are designed to shuffle and place each part of the four digests at the maximum possible distance from its original location and from the corresponding parts of the other digests. Using these strategies, security and the recovery rate are increased. Accordingly, even if more than half of the image is manipulated, TRLG is able to recover the tampered region efficiently. As shown in Sec. 4, TRLG can efficiently recover tampered parts with remarkable quality when up to 80% of the watermarked image is manipulated.
First of all, the coefficients of the primary digests are shuffled to improve security using the novel chaotic map discussed in Sec. 2.1. The shuffling steps of the primary digests are explained in detail below:
Two copies of the chaotic sequence are taken.
The permutation position matrices are obtained by sorting the two sequences in ascending order.
Each plane of the primary digests generated in the previous section is converted into a 1D matrix as:
In this step, the shuffled digest pixel matrices are obtained by utilizing Eq. (9):
The shuffled matrices are converted back into 2D matrices of the appropriate size.
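The sort-based shuffling in the steps above can be sketched as follows (illustrative names; the values passed in below stand in for a CCS chaotic sequence):

```python
def permutation_from_sequence(seq):
    """Permutation given by the positions of the chaotic values in ascending order."""
    return sorted(range(len(seq)), key=lambda i: seq[i])

def shuffle_1d(data, perm):
    """Place element perm[i] of the data at position i."""
    return [data[p] for p in perm]

def unshuffle_1d(shuffled, perm):
    """Invert shuffle_1d: scatter each element back to its original position."""
    out = [None] * len(shuffled)
    for i, p in enumerate(perm):
        out[p] = shuffled[i]
    return out

perm = permutation_from_sequence([0.3, 0.1, 0.9, 0.5])  # stand-in chaotic values
```

Because the receiver regenerates the same chaotic sequence from the secret keys, the same permutation is available for unshuffling during recovery.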
Next, to improve the recovery rate, the Shift-aside technique ref1 () is utilized to reorder the coefficients of the first primary digest again. Accordingly, if the right or left half of the image is totally tampered, the recovery phase is able to recover the tampered region from the copy embedded in the other side. In addition, each side is divided into two separate parts again, which makes the recovery phase more efficient when the tampered region is located at the center of the image. It should be noted that these processes are applied to all planes, and the digest is then updated.
In order to reorder the coefficients of the second primary digest, a new technique called the Mirror-aside operation is proposed in TRLG. In the Mirror-aside scheme, the coefficients of the top and bottom halves of the digest are swapped. To do so, the digest is first divided into four non-overlapping blocks, and each block is then divided into four non-overlapping blocks again. Fig. 2(a) illustrates the dividing process. As in the Shift-aside scheme, the locations determined by CCS are reordered and placed into the corresponding quarter. Finally, the reordered digest is formed and updated.
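One plausible reading of the top/bottom swap is sketched below (an assumption about the exact operation, which the paper specifies via Fig. 2(a)); note that the swap is its own inverse, which is what recovery relies on:

```python
def mirror_aside(mat):
    """Swap the top and bottom halves of a digest matrix (assumed interpretation)."""
    half = len(mat) // 2
    return mat[half:] + mat[:half]

m = [[1, 2], [3, 4], [5, 6], [7, 8]]
swapped = mirror_aside(m)
```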
Subsequently, to reorder the coefficients (bits) of the secondary digest, two copies of it are first generated. Next, the coefficients of these digests are reordered according to Fig. 2(b) and Fig. 2(c), respectively. This strategy is named Partner-block. Unlike the primary digests, the secondary digests are not shuffled by any chaotic map. As can be seen, the shuffling and reordering schemes for all digests in TRLG are designed to achieve the maximum recovery rate under large tampering rates.
3.1.3 Generating authentication bits
In TRLG, one bit is considered for authenticating each 2×2 block. The process of generating the authentication bits is explained in detail below:
In the first step, each band of the two primary digests is converted into binary form. Next, the results are combined by Eq. (10):
where the indices and the operator denote, respectively, the index of each band in the primary digests and string concatenation. The result is a binary matrix in which each cell contains 68 bits belonging to the information of the two primary digests.
The bits of the primary digests are arranged into the designated positions of each 4×4 block, according to Fig. 3, as:
where the terms express the primary bits in each plane.
Next, the primary bits that must be embedded in each block in all planes are denoted as:
where the two indices represent the block and its inner sub-block, respectively, and each element is calculated as follows:
where the terms represent the corresponding sub-block and the string concatenation operator, respectively.
At the end, of size is generated as:
where each element in contains 17 bits (or 5 bits for gray images) that belong to the data of the primary digests.
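A simple way to picture a per-block authentication bit is sketched below. The parity-of-MSBs rule used here is a hypothetical stand-in (TRLG derives the bit from the digest information assigned to the block); it only illustrates the key property needed: the bit depends on the pixel content that is not overwritten by embedding, so tampering with those bits changes it.

```python
import numpy as np

def auth_bit(block):
    """One authentication bit for a 2x2 block: parity of the six MSBs of
    each pixel (hypothetical rule; the 2 LSBs are excluded because they
    will carry the embedded watermark)."""
    msb_only = (block.astype(np.uint8) >> 2) << 2  # drop the 2 LSBs
    return int(np.unpackbits(msb_only.flatten()).sum() % 2)
```

Because the 2 LSBs are masked out, re-embedding the watermark into a block does not invalidate its own authentication bit.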
3.1.4 Combining watermark bits
After generating and shuffling the primary and secondary digests, and computing the authentication bits, all bits are organized to be ready for embedding in . In TRLG, 8 bits are embedded into the 2 LSBs of each 2×2 block. The bit arrangement of the four digests and the authentication bit is shown in Fig. 3. As shown, assuming is a color image, 20 bits are required for the luminance (19+1 bits), and 14 bits are considered for the chrominance. In total, to provide a second chance for the primary digest, a 68-bit space should be reserved for hiding this information. In addition, a 24-bit space is required for embedding two copies of the secondary digest; in other words, for each 2×2 block, six bits are reserved for the secondary digests. Altogether, 92 digest bits and 4 authentication bits are combined to be embedded into the 2 LSBs of each 4×4 block in each plane. For gray images, 20 bits (19+1 bits) are considered for the primary digest, and 8 bits are reserved for embedding two copies of the secondary digest. Similarly, 28 digest bits and 4 authentication bits are combined for embedding into the 2 LSBs of each 4×4 block in the next phase.
Hence, let the watermark to be embedded in each block be , which is achieved by using Eq. (14):
where and represent the index of each plane and the string join operator, respectively. At the end of this phase, the 8 bits that will be embedded into each 2×2 block are encapsulated in each element of , ready for the next phase.
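The bit budget for a gray image described above (20 primary + 8 secondary + 4 authentication bits per 4×4 block, split into four 8-bit groups for its 2×2 sub-blocks) can be sketched as a packing step. The function names and the even split of the secondary copies are assumptions for illustration:

```python
def pack_gray_block(primary_bits, secondary_bits, auth_bits):
    """Combine the per-4x4-block payload for a gray image:
    20 primary-digest bits + 8 secondary-digest bits + 4 authentication
    bits = 32 bits, split into four 8-bit groups (one per 2x2 sub-block,
    matching the counts stated in the text)."""
    bits = list(primary_bits) + list(secondary_bits) + list(auth_bits)
    assert len(bits) == 32, "expected 28 digest bits + 4 authentication bits"
    return [bits[i:i + 8] for i in range(0, 32, 8)]
```

Each returned 8-bit group fits exactly into the 2 LSBs of the four pixels of one 2×2 sub-block.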
3.1.5 Encrypting and embedding watermark
In this phase, first, the watermark of each block is made dependent on the content of the current block and its neighbors. Owing to this strategy, TRLG is able to detect security tampering applied through collage, vector-quantization, or protocol attacks. The details of this strategy are explained below:
The candidate version of is obtained by Eq. (3):
This process should be repeated for all 2×2 blocks.
In this step, the relations between the 2×2 sub-blocks of each 4×4 block in are computed. To do so, of size is partitioned into non-overlapping blocks of 2×2 pixels, and the th block is expressed as:
Now, the relations between the pixels of are calculated by using Eq. (4):
Finally, the dependent watermark for each 2×2 block is achieved by Eq. (5):
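The content-binding step can be sketched as follows. The pairwise-comparison rule for the relation bits is a hypothetical stand-in for Eq. (4), which is not reproduced in this excerpt; the point is that the same block yields the same relation bits, so a watermark copied from elsewhere (a collage attack) no longer matches.

```python
import numpy as np

def block_relation_bits(block):
    """Hypothetical relation bits for a 2x2 block: cyclic pairwise
    comparisons of its four pixels (stand-in for Eq. (4))."""
    p = block.flatten().astype(int)
    return [int(p[i] >= p[(i + 1) % 4]) for i in range(4)]

def dependent_watermark(wm_bits, block):
    """Bind the watermark bits to the block content by XOR-ing them with
    the relation bits, repeated to length (sketch of Eq. (5))."""
    rel = block_relation_bits(block)
    return [b ^ rel[i % 4] for i, b in enumerate(wm_bits)]
```

Since XOR is its own inverse, the receiver recovers the original watermark by applying the same operation to the received block content.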
In the following, in order to improve security, guarantee the originality of the watermark, and prevent the predictability of the arrangement and values of the bits, further processing is performed on . To do so, the watermark bits are encrypted and permuted according to the following steps:
In this step, the permuted and encrypted watermark is achieved according to Eq. (22):
where is the exclusive-or operator, and is the permutation function, which permutes the bits of the watermark by Eq. (23):
where is a secret key and is the total number of bits ( = 13, = 8).
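The permute-then-XOR step can be sketched as below. A key-seeded shuffle stands in for the permutation function of Eq. (23), whose exact form is not given in this excerpt, and the key stream is supplied by the caller:

```python
import random

def permute_bits(bits, key):
    """Key-driven bit permutation (seeded shuffle as a stand-in for Eq. (23))."""
    idx = list(range(len(bits)))
    random.Random(key).shuffle(idx)
    return [bits[i] for i in idx]

def unpermute_bits(perm, key):
    """Invert permute_bits by rebuilding the same index order."""
    idx = list(range(len(perm)))
    random.Random(key).shuffle(idx)
    out = [0] * len(perm)
    for j, src in enumerate(idx):
        out[src] = perm[j]
    return out

def encrypt_watermark(bits, key, keystream):
    """Sketch of Eq. (22): permute, then XOR with the key stream."""
    return [b ^ k for b, k in zip(permute_bits(bits, key), keystream)]

def decrypt_watermark(enc, key, keystream):
    return unpermute_bits([b ^ k for b, k in zip(enc, keystream)], key)
```

The decryption path simply applies the two operations in reverse order, which is what the extraction phase relies on.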
As stated in the previous section, in TRLG, to improve the quality of the watermarked image and the security of the watermarks, the watermark embedded in each block of is encrypted with a that is intelligently selected by GA. In other words, this strategy decreases the difference between the watermark and the original values (LSBs) while achieving a high level of security. The details of GA training are further explained in the watermark optimization sub-section. At the end of the GA training, is achieved. Now, the watermark bits are encrypted again by Eq. (25):
where is the exclusive-or operator. Finally, the 24 watermark bits (8 bits for gray images) are embedded into the 2 LSBs of each 2×2 non-overlapping block of in each plane. In TRLG, to decrease the difference between the watermarked and original pixels, a modified LSB-matching that considers statistical parameters of the block is proposed. It should be noted that this strategy is applied to all pixels in each plane, except the candidate pixels and planes chosen in the encryption phase. Algorithm 1 shows the pseudo-code of this technique in detail.
Finally, the watermarked image is achieved, and it can be transferred over communication channels.
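Embedding 8 bits into the 2 LSBs of a 2×2 block can be sketched with plain substitution. This is a baseline only: TRLG's Algorithm 1 uses a modified LSB-matching guided by block statistics, which is not reproduced here.

```python
import numpy as np

def embed_2lsb(block, bits):
    """Write 8 bits into the 2 LSBs of a 2x2 block by substitution
    (baseline sketch of the embedding in Algorithm 1)."""
    vals = [bits[2 * i] * 2 + bits[2 * i + 1] for i in range(4)]
    flat = block.flatten().astype(np.uint8)
    flat = (flat & 0b11111100) | np.array(vals, dtype=np.uint8)
    return flat.reshape(block.shape)

def extract_2lsb(block):
    """Read the 8 embedded bits back from the 2 LSBs."""
    out = []
    for v in block.flatten():
        out += [(int(v) >> 1) & 1, int(v) & 1]
    return out
```

Substitution changes each pixel by at most 3 gray levels, which is why confining the payload to the 2 LSBs keeps the watermarked image close to the original.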
Watermark Optimization: In TRLG, to maximize the similarity between the watermark bits and the LSBs of the pixels in each 2×2 block, and to further enhance security, GA is employed. This strategy effectively balances the difference between the watermark and original bits. To do so, GA is applied to find the optimal parameter , which is used to decrease the difference between the coefficients. The overall GA-based watermark optimization is summarized below:
In the first step, the initial key population is randomly created and converted into chromosomes. Next, the watermarked image is generated based on the solutions in the population.
The fitness value between and is evaluated by Eq. (26):
where PSNR is Peak Signal to Noise Ratio.
In the last step, the operators of selection, crossover, and mutation are applied to each chromosome to generate the next generation.
These steps continue until a predefined condition is satisfied or a fixed number of generations is exceeded. Finally, the optimal key is achieved.
3.2 Tamper detection and recovery
After receiving the suspicious watermarked image through the public communication channels, the tampered regions are first located and marked with 2×2 accuracy, and then recovered from the valid parts of the four digests embedded in . The procedure of tamper detection and recovery is described in detail below:
3.2.1 Extracting and decrypting watermark
In this phase, the watermark bits are extracted from the two LSBs of each 2×2 block of . Subsequently, the watermark bits are decrypted and de-permuted to obtain the initial watermark bits. These processes are explained below in detail:
Firstly, the watermark bits are extracted from each 2×2 block. Let the result be denoted as .
The watermark bits are decrypted based on by Eq. (27):
where is the exclusive-or operator.
To return the watermark bits to their initial positions and decrypt them, the process is performed in the inverse direction. Hence, the watermark is reconstructed by Eq. (28):
where is achieved based on Eq. (24).
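The whole extraction path for one block can be sketched as the exact inverse of the encryption sketch: read the 2 LSBs, XOR with the key stream, then undo the key-seeded permutation (which again stands in for Eq. (23)).

```python
import random
import numpy as np

def extract_and_decrypt(block, key, keystream):
    """Recover the watermark of one 2x2 block: read the 2 LSBs,
    XOR with the key stream (Eq. (27)), then invert the key-driven
    permutation (seeded shuffle as a stand-in for Eq. (23))."""
    bits = []
    for v in block.flatten():
        bits += [(int(v) >> 1) & 1, int(v) & 1]
    dec = [b ^ k for b, k in zip(bits, keystream)]
    idx = list(range(len(dec)))
    random.Random(key).shuffle(idx)
    out = [0] * len(dec)
    for j, src in enumerate(idx):
        out[src] = dec[j]
    return out
```

With matching key and key stream, this reproduces the original 8 watermark bits of the block bit-for-bit.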
[Table: per-image comparison of TRLG against ref7 (); ref12 (); ref13 (); ref14 (); ref15 (); ref16 (); ref19 (); ref20 (); ref21 (); ref22 (); ref32 (). Note: "-" means the image is unavailable when using the previous scheme.]
3.2.2 Authenticating the received image
In this phase, the authentication bits are fetched from each block and then compared with the bits generated by the same procedure as before. If the extracted and generated authentication bits of a block match, the block is marked as valid; otherwise, it is marked as invalid. The authentication steps are explained below in detail:
Next, the tampered 2×2 blocks are recognized by comparing the extracted and generated authentication bits by Eq. (31):
Last, the closing morphological operator is applied to as post-processing to fill gaps between tampered blocks that are incorrectly marked as valid. A 5×5 square is used as the structuring element in this step.
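The post-processing step can be sketched with SciPy's binary closing (assuming SciPy is available); a false negative surrounded by detected tampering is filled by the 5×5 structuring element:

```python
import numpy as np
from scipy.ndimage import binary_closing

# 1 marks a tampered 2x2 block, 0 a valid one (toy 8x8 block map)
tamper_map = np.zeros((8, 8), dtype=bool)
tamper_map[2:6, 2:6] = True
tamper_map[4, 4] = False  # isolated gap wrongly marked as valid

# Morphological closing with a 5x5 square structuring element
closed = binary_closing(tamper_map, structure=np.ones((5, 5), dtype=bool))
```

After closing, the isolated gap inside the tampered region is marked as tampered too, so it will also be recovered from the digests.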
3.2.3 Reconstructing digests and recovering tampered regions
After the authentication phase, the tampered 2×2 blocks can be recovered. To do so, first, the four primary and secondary digests are reconstructed and reshuffled to their initial positions. Then, for each invalid block of , the recovery steps are triggered to correct the tampered regions. The digest reconstruction and tamper recovery procedure includes the following steps:
First of all, the four primary and secondary digests are extracted from . Then, the digests are formed according to Fig. 3; the primary digests are defined as , and the secondary digests as .
In the following, the valid parts of and are marked and updated. In other words, is checked during the extraction step to reconstruct each digest based on its valid parts.
To place the coefficients into their initial positions, inverse reshuffling is applied to the plane coefficients of , including . For this aim, each plane of is converted into a 1D matrix. Then, the coefficients of each digest are reordered and reshuffled based on the Shift-aside and Mirror-aside operators, and two chaotic sequences are generated based on Eq. (2) with and by Eq. (32):
At the end, and are converted into 2D matrices of size and , and is updated.
Now, two unique digests are generated based on the valid parts of each digest by Eq. (5):
where is the union operator.
In this step, to reconstruct , zero bits are padded to all coefficients, including [ , and ], based on and the LSBs ignored according to Eq. (6). Also, zeros are padded to the chrominance components, including . At the end, the coefficients are converted from binary to integer type.
Next, the invalid regions of the chrominance components, including , are reconstructed based on valid neighbors and resized to .
Now, inverse quantization is applied to all coefficients of each band, and they are updated by Eq. (34):
where and are the quantization step and the coefficients of the wavelet bands, respectively.
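The inverse quantization of Eq. (34) amounts to multiplying each quantized wavelet coefficient by the quantization step; paired with a rounding quantizer, the round-trip error is bounded by half a step. A minimal sketch (the rounding quantizer is an assumption, since the forward rule is not shown in this excerpt):

```python
import numpy as np

def quantize(coeffs, step):
    """Forward quantization (assumed rounding rule)."""
    return np.round(coeffs / step).astype(int)

def dequantize(q_coeffs, step):
    """Inverse quantization per Eq. (34): coefficient = index * step."""
    return q_coeffs.astype(float) * step
```

This bounded error is what keeps the recovered digest close to the original wavelet coefficients despite the compact payload.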
A -level inverse LWT is applied to [ , and ], and the luminance is reconstructed as . Then, the primary digest is converted to RGB, and the result is denoted as .
The inverse halftoning algorithm ref5 () is employed on to generate the secondary digest from the halftone version; the result is denoted as .
Finally, the unique digest is achieved by combining the valid parts of the digests based on Eq. (35):