Model Watermarking for Image Processing Networks

Abstract

Deep learning has achieved tremendous success in numerous industrial applications. As training a good model often requires massive high-quality data and computation resources, the learned models often have significant business value. However, these valuable deep models are exposed to a huge risk of infringement. For example, if the attacker has full information about a target model, including the network structure and weights, the model can easily be fine-tuned on new datasets. Even if the attacker can only access the output of the target model, he/she can still train a similar surrogate model by generating a large number of input-output training pairs. How to protect the intellectual property of deep models is a very important but seriously under-researched problem. There are only a few recent attempts, and they target classification networks only.

In this paper, we propose the first model watermarking framework for protecting image processing models. To achieve this goal, we leverage the spatial invisible watermarking mechanism. Specifically, given a black-box target model, a unified and invisible watermark is hidden into its outputs, which can be regarded as a special task-agnostic barrier. In this way, when the attacker trains a surrogate model on the input-output pairs of the target model, the hidden watermark will be learned and can be extracted afterward. To support watermarks ranging from binary bits to high-resolution images, both traditional and deep spatial invisible watermarking mechanisms are considered. Experiments demonstrate the robustness of the proposed watermarking mechanism, which can resist surrogate models trained with different network structures and objective functions. Besides deep models, the proposed method can also easily be extended to protect data and traditional image processing algorithms.

Introduction

In recent years, deep learning has revolutionized a wide variety of tasks such as image recognition [19, 9], medical image processing [12, 13, 36, 11], speech recognition [8, 37] and natural language processing [28], and significantly outperforms traditional state-of-the-art methods. To fully utilize the strong learning capability of these deep models and avoid overfitting, a large amount of high-quality labeled data and massive computation resources are often required. Since both human annotation and computation resources are expensive, these learned models are of great business value and need to be protected. However, compared to traditional image watermarking, protecting the intellectual property (IP) of deep models is much more challenging. Because of the exponential search space of network structures and weights, numerous structure and weight combinations exist for one specific task. In other words, we can achieve similar or better performance even if we slightly change the structure or weights of the target model.

In the white-box case, where the full information including the detailed network structure and weights of the target model is known, one typical and effective attack is to fine-tune or prune the target model on new task-specific datasets. Even in the black-box case, where only the output of the target model can be accessed, the attacker can still steal the intellectual property of the target model by using another surrogate model to imitate its behavior. Specifically, the attacker can first generate a large number of input-output training pairs based on the target model, then directly train the surrogate model in a supervised manner by regarding the outputs of the target model as ground-truth labels.

Very recently, some research works [27, 1, 35, 21] have started paying attention to the IP protection problem for deep neural networks. They often either add a parameter regularizer to the loss function or use the predictions of a special set of indicator images as the watermarks. However, deep watermarking is still a seriously under-researched field and all existing methods only consider the classification task. In real scenarios, labeling the training data for image processing tasks is much more complex and expensive than for classification tasks, because the ground-truth labels should be precise at the pixel level. Examples include removing all the ribs in chest X-ray images and the rain streaks in real rainy images. Therefore, protecting such image processing models is even more valuable.

Motivated by this, this paper considers the deep watermarking problem for image processing models for the first time. Because the original raw model does not need to be delivered in most application scenarios, it can easily be encrypted with traditional algorithms to resist white-box attacks (i.e., fine-tuning or pruning). So we mainly consider the black-box attack case, where only the outputs of the target model can be obtained and attackers use surrogate models to imitate it. To resist such attacks, the designed watermarking mechanism should guarantee that the watermarks can still be extracted from the outputs of the learned surrogate models.

Before diving into model watermarking for image processing networks, we first discuss the simplest spatial visible watermarking mechanism, shown in Figure 1. Suppose that we have many input-output training pairs and we manually add a unified visible watermark template to all the outputs. Intuitively, if a surrogate model is trained on such pairs with the simple ℓ2 loss, the learned model will reproduce this visible watermark in its output in order to reach a lower loss. That is to say, given a target model, if we forcibly add one unified visible watermark to all its outputs, it can resist plagiarism by surrogate models to some extent. However, the biggest limitation of this method is that the added visible watermarks seriously sacrifice the visual quality and usability of the target model. Another potential threat is that attackers may use image editing tools like Photoshop to manually remove all the visible watermarks.

Figure 1: The simplest watermarking mechanism by adding unified visible watermarks onto the target output images, which will sacrifice the visual quality and usability.

To address the above limitations, we propose a general model watermarking framework by leveraging the spatial invisible watermarking mechanism, as shown in Figure 2. Given a target model M to be protected, we denote its original input and output images as domain A and domain B respectively. Then a spatial invisible watermark embedding method is used to hide a unified target watermark δ into all the output images in domain B, generating a new domain B'. Different from the simple visible watermarks above, all the images in domain B' should be visually consistent with those in domain B. Symmetrically, given the images in domain B', the corresponding watermark extraction algorithm can extract the watermark out, which should be consistent with δ. The key hypothesis here is that when the attacker uses A and B' to learn a surrogate model SM, the extraction algorithm can still extract the target watermark δ from the output of SM.

We first test the effectiveness of our framework by using traditional spatial invisible watermarking algorithms like [20, 29]. They work well for some surrogate models but fail for others. Another big limitation is that the information capacity they can hide is relatively low, e.g., tens of bits. To hide high-capacity watermarks like logo images and achieve better robustness, we propose a novel deep invisible watermarking system, shown in Figure 3, which consists of two main parts: an embedding sub-network H that learns how to hide invisible watermarks in an image, and an extractor sub-network R that learns how to extract the invisible watermark out. To avoid extracting a watermark from every image regardless of whether it contains one, we also constrain R not to extract any watermark when its input is a clean image. To further boost the robustness, an additional adversarial training stage is used.

Experiments show that the proposed method can resist attacks from surrogate models trained with different network structures, such as ResNet and UNet, and different loss functions, such as ℓ1, ℓ2, perceptual loss and adversarial loss. Depending on the specific task, we find it is also possible to combine the functionality of the target model M and the embedding sub-network H to train a task-specific H.

To summarize, our contributions are fourfold:

  • We are the first to introduce the intellectual property protection problem for image processing tasks. We hope it can draw more attention to this research field and inspire more great works.

  • We propose the first model watermarking framework to protect image processing networks by leveraging the spatial invisible watermarking mechanism.

  • We design a novel deep watermarking algorithm to improve the robustness and capacity of traditional spatial invisible watermarking methods.

  • Extensive experiments demonstrate that the proposed framework can resist the attack from surrogate models trained with different network structures and loss functions. It can also be easily extended to protect valuable data and traditional algorithms.

Related work

Media Watermarking Algorithms. Watermarking is one of the most important ways to protect media copyright. For image watermarking, many different algorithms have been proposed in the past decades, which can be roughly categorized into two types: visible watermarks like logos, and invisible watermarks. Compared to visible watermarks, invisible watermarks are more secure and robust. They are often embedded in the original spatial domain [20, 29, 4, 30], or in other transform domains such as the discrete cosine transform (DCT) domain [14, 10], the discrete wavelet transform (DWT) domain [2], and the discrete Fourier transform (DFT) domain [25]. However, all these traditional watermarking algorithms can often hide only several or tens of bits, let alone explicit logo images. More importantly, we find that only spatial-domain watermarking works to some extent for this task, while all the transform-domain watermarking algorithms fail.

In recent years, some DNN-based watermarking schemes have been proposed. For example, Zhu et al. [38] propose an auto-encoder-based network architecture to realize the embedding and extraction of watermarks. Based on it, Tancik et al. [26] further realize a camera-shooting-resilient watermarking scheme by adding simulated camera shooting distortion to the noise layer. Compared to these image watermarking algorithms, model watermarking is much more challenging because of the exponential search space of deep models. But we innovatively find it possible to leverage spatial invisible watermarking techniques for model protection.

Model Watermarking Algorithms. Though watermarking for deep neural networks is still seriously under-studied, some recent works [27, 1, 22, 35] have started paying attention to it. For example, based on the over-parameterized property of deep neural networks, Uchida et al. [27] propose a special weight regularizer for the objective function so that the distribution of weights can be resilient to attacks such as fine-tuning and pruning. One big limitation is that this method is not task-agnostic and needs to know the original network structure and parameters for retraining. Adi et al. [1] use a particular set of inputs as indicators and let the model deliberately output specific incorrect labels; however, this may not work if the network is retrained. Zhang et al. [35] associate the watermark with the actual identity by making significant changes to the original images, which is easy to detect.

However, all the methods mentioned above focus on classification tasks, which is different from the purpose of this paper: protecting image processing models of higher commercial value. We innovatively leverage spatial invisible watermarking algorithms for image processing networks and propose a new deep invisible watermarking technique to enable high-capacity watermarks (e.g., logo images).

Image-to-image Translation Networks. In the deep learning era, most image processing tasks, such as image segmentation, edge-to-image translation, deraining, and X-ray image debone, can be solved with an image-to-image translation network where the input and output are both images. Recently this field has achieved significant progress, especially after the emergence of the generative adversarial network (GAN) [7]. Isola et al. propose a general image-to-image translation framework by combining adversarial training in [15], which is further improved by many following works [3, 31, 23]. The limitation of these methods is that they need a lot of pairwise training data. By introducing cycle consistency, Zhu et al. propose a general unpaired image-to-image translation framework, CycleGAN [39]. In this paper, we mainly focus on deep models for paired image-to-image translation, because paired training data is much more expensive to obtain than classification or unpaired datasets. More importantly, no prior work has ever considered the watermarking issue for such models.

Figure 2: The proposed deep watermarking framework by leveraging spatial invisible watermarking algorithms.

Method

In this section, we elaborate the details of the proposed method. Before that, we first introduce the formal problem definition and give a simple theoretical pre-analysis to justify our hypothesis.

Problem Definition. For image processing tasks, assume the input domain A is composed of images a_i, and the target output domain B consists of images b_i. In this paper, we only consider the pairwise case where a_i and b_i are one-to-one matched by an implicit transformation function T. Then the goal of the image processing model M is to approximate T by minimizing the distance between M(a_i) and b_i, i.e.,

    min_M  Σ_i d(M(a_i), b_i),   a_i ∈ A,  b_i = T(a_i) ∈ B        (1)

Assume we have learned a target model M based on massive private image pairs and computation resources. Given an input image a_i in domain A, M will output an image b_i in domain B. The attacker may then use the image pairs (a_i, M(a_i)) to train another surrogate model SM. Our goal is to design an effective deep watermarking mechanism that is able to identify SM once it has been trained with data generated by M. Because in real scenarios it is highly possible that we cannot access SM in a white-box way, the only indicator we can leverage is the output of SM. Therefore, we need to figure out a way to extract watermarks from the output of SM.
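To make the threat model concrete, the following sketch (assuming PyTorch; names such as train_surrogate, target_model and loader are illustrative and not part of our method) shows how an attacker could fit SM purely from black-box queries to M:

    # Hypothetical sketch of the black-box surrogate attack described above.
    # `target_model` is the deployed model M queried as a black box,
    # `surrogate` is the attacker's network SM, `loader` yields images from A.
    import torch
    import torch.nn as nn

    def train_surrogate(target_model, surrogate, loader, epochs=10, lr=1e-4):
        opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
        l2 = nn.MSELoss()
        for _ in range(epochs):
            for a in loader:                    # images from the input domain A
                with torch.no_grad():
                    b = target_model(a)         # outputs of M act as pseudo ground truth
                opt.zero_grad()
                loss = l2(surrogate(a), b)      # fit SM to mimic M
                loss.backward()
                opt.step()
        return surrogate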

Theoretical Pre-analysis. In traditional watermarking algorithms, given an image b_i and a target watermark δ to embed, a watermark embedding algorithm is first used to generate an image b'_i which contains δ. Symmetrically, the target watermark can be further extracted out with the corresponding watermark extraction algorithm. Considering that each image b_i ∈ B is embedded with the same unified watermark δ, forming another domain B', there must exist a model M' which can learn a good transformation between domain A and domain B'. The simplest solution for M' is to directly add δ to the output of M with a skip connection:

    M'(a_i) = M(a_i) + δ        (2)

Based on the above observation, we propose a general deep watermarking framework for image processing models, shown in Figure 2. Given a target model M to protect, we add a barrier by embedding a unified watermark δ into all its output images before showing them to the end-users. So the surrogate model SM has to be trained with image pairs from domains A and B' (with watermark), instead of the original pairs from A and B. No matter what architecture SM adopts, its behavior will approach that of M' in preserving the unified watermark δ; otherwise, its objective loss function cannot reach a low value. The watermark extraction algorithm can then extract the watermark δ from the output of SM.

To ensure that the watermarked output image b'_i is visually consistent with the original one b_i, only spatial invisible watermarking algorithms are considered in this paper. Below we try both a traditional spatial invisible watermarking algorithm and a novel deep invisible watermarking algorithm.

Traditional Spatial Invisible Watermarking. Additive-based embedding is the most common method used in traditional spatial invisible watermarking schemes. The watermark is first spread into a sequence or block that satisfies a certain distribution, and then embedded into the corresponding coefficients of the host image. This embedding procedure can be formulated as

    b'_i = b_i + α · w_k,   k ∈ {0, 1}        (3)

where b_i and b'_i indicate the original image and the embedded image respectively, α indicates the embedding intensity, and w_k denotes the spread image block that represents bit k (k ∈ {0, 1}). On the extraction side, the watermark bit is determined by detecting the distribution of the corresponding coefficients. The robustness of such an algorithm is guaranteed by the spread-spectrum operation: the redundancy it brings gives the watermark strong error-correction ability, so that the distribution of the block does not change much even after image processing.
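The sketch below (NumPy; the block size, intensity α and the non-blind correlation detector are simplifying assumptions for illustration) shows the additive spread-spectrum idea of Eq. (3) for a single bit:

    # Minimal sketch of additive spread-spectrum embedding (Eq. 3) for one bit.
    import numpy as np

    rng = np.random.default_rng(0)
    BLOCK = 8                                        # one bit is spread over an 8x8 block
    w = {0: rng.standard_normal((BLOCK, BLOCK)),     # spread pattern for bit 0
         1: rng.standard_normal((BLOCK, BLOCK))}     # spread pattern for bit 1

    def embed_bit(block, bit, alpha=2.0):
        # Eq. (3): b' = b + alpha * w_k
        return block + alpha * w[bit]

    def extract_bit(marked_block, host_block):
        # non-blind sketch: correlate the residual with the two candidate patterns;
        # practical schemes detect the bit blindly from the coefficient statistics
        residual = marked_block - host_block
        return int(np.sum(residual * w[1]) > np.sum(residual * w[0]))

    host = rng.uniform(0, 255, (BLOCK, BLOCK))
    assert extract_bit(embed_bit(host, 1), host) == 1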

However, such algorithms often have very limited embedding capacity because many extra redundant bits are needed to ensure robustness. In fact, in many application scenarios, the IP owners may want to embed some special images (e.g., logos) explicitly, which is nearly infeasible for these algorithms. More importantly, the following experiments show that these traditional algorithms can only resist some special types of surrogate models. To enable more high-capacity watermarks and more robust resistance ability, we propose a new deep invisible watermarking algorithm and utilize a two-stage training strategy as shown in Figure 3.

Figure 3: The overall pipeline of the proposed deep invisible watermarking algorithm and the two-stage training strategy. In the first training stage, a basic watermark embedding sub-network H and extractor sub-network R are trained. Then a surrogate network SM is leveraged as an adversarial competitor to further enhance the extraction ability of R.

Deep Invisible Watermarking. To embed an image watermark δ into host images of domain B and extract it out afterward, an embedding sub-network H and an extractor sub-network R are adopted respectively. To avoid sacrificing the original image quality of domain B, we require the images with the hidden watermark (domain B') to remain visually consistent with the original images in domain B. Since adversarial networks have demonstrated their power in reducing the domain gap in many different tasks, we append a discriminator network D after H to further improve the image quality of domain B'. During training, we find that if the extractor network R is trained only on the images of domain B', it easily overfits and outputs the target watermark regardless of whether the input image contains a watermark. To avoid this, we also feed the watermark-free images of domains A and B into R and force it to output a constant blank image. In this way, R extracts a watermark only when the input image actually contains one.
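A schematic PyTorch training step for this first stage is sketched below; the module names H, R, D, the channel-wise concatenation of the host image and the watermark, and the loss weighting are assumptions for illustration, and the perceptual loss term is omitted for brevity:

    # Schematic first-stage training step (hypothetical module/variable names).
    # H hides the watermark, R extracts it, D judges visual consistency; clean
    # images from A and B are forced to yield a blank extraction.
    import torch
    import torch.nn as nn

    bce, l2 = nn.BCEWithLogitsLoss(), nn.MSELoss()

    def train_step(H, R, D, opt_hr, opt_d, a, b, delta, blank, lam=1.0):
        wm = delta.expand_as(b)                    # unified watermark, shape (1, C, H, W) -> batch
        b_wm = H(torch.cat([b, wm], dim=1))        # watermarked image b'

        # update discriminator: real = watermark-free b, fake = watermarked b'
        opt_d.zero_grad()
        real, fake = D(b), D(b_wm.detach())
        d_loss = bce(real, torch.ones_like(real)) + bce(fake, torch.zeros_like(fake))
        d_loss.backward()
        opt_d.step()

        # update H and R jointly: visual consistency + extraction + clean loss
        opt_hr.zero_grad()
        fake = D(b_wm)
        emb_loss = l2(b_wm, b) + bce(fake, torch.ones_like(fake))
        ext_loss = l2(R(b_wm), wm) + l2(R(a), blank.expand_as(b)) + l2(R(b), blank.expand_as(b))
        (emb_loss + lam * ext_loss).backward()
        opt_hr.step()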

Based on the pre-analysis, when the attacker uses a surrogate model SM to imitate the target model M based on the input domain A and the watermarked domain B', SM will learn the hidden watermark into its output thanks to the inherent fitting property of deep networks. However, despite the higher hiding capacity, similar to traditional watermarking algorithms, the extractor sub-network R cannot extract the watermarks from the output of the surrogate model if it is trained only with this initial stage. This is because R has only observed clean watermarked images, not the watermarked images produced by surrogate models, which may contain unpleasant noise. To further enhance the extraction ability of R, we choose one simple surrogate network SM to imitate the attacker's behavior and fine-tune R on the mixture of domain B' and the outputs of SM. Experiments show this significantly boosts the extraction ability of R and resists other types of surrogate models.

Network Structures. In our method, we adopt the UNet [24] as the default network structure of H and SM, which has been widely used in many translation-based tasks like [15, 39]. It performs especially well for tasks where the output image shares common properties with the input image, thanks to its multi-scale skip connections. For the extractor sub-network R, whose output is different from its input, we find CEILNet [5] works much better. It also follows an auto-encoder-like network structure. In detail, the encoder consists of three convolutional layers, and the decoder consists of one deconvolutional layer and two convolutional layers symmetrically. To enhance the learning capacity, nine residual blocks are inserted between the encoder and decoder. For the discriminator D, we adopt PatchGAN [15] by default. Note that except for the extractor sub-network, we find other types of translation networks also work well in our framework, which demonstrates its strong generalization ability.
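For concreteness, a PyTorch sketch of such an auto-encoder-like extractor is given below; the channel widths, kernel sizes and the single stride-2 downsampling step are our assumptions and are not taken from [5]:

    # Sketch of the auto-encoder-like extractor R described above: a 3-layer
    # convolutional encoder, 9 residual blocks, and a symmetric decoder with
    # one deconvolution and two convolutions. Hyper-parameters are assumptions.
    import torch.nn as nn

    class ResBlock(nn.Module):
        def __init__(self, ch):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1))
        def forward(self, x):
            return x + self.body(x)

    class Extractor(nn.Module):
        def __init__(self, in_ch=3, out_ch=3, ch=64):
            super().__init__()
            self.encoder = nn.Sequential(              # three convolutional layers
                nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True))
            self.blocks = nn.Sequential(*[ResBlock(ch * 2) for _ in range(9)])
            self.decoder = nn.Sequential(              # one deconv + two convs
                nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, out_ch, 3, padding=1))
        def forward(self, x):
            return self.decoder(self.blocks(self.encoder(x)))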

Loss Functions. The objective loss function of our method consists of two parts: the embedding loss ℓ_emb and the extraction loss ℓ_ext, i.e.,

    L = ℓ_emb + λ · ℓ_ext        (4)

where λ is the hyper-parameter balancing the two loss terms. Below we introduce the detailed formulations of ℓ_emb and ℓ_ext respectively.

Embedding Loss. To embed the watermark image while guaranteeing the original visual quality, three different types of visual consistency loss are considered: the basic ℓ2 loss ℓ_2, the perceptual loss ℓ_perc, and the adversarial loss ℓ_adv, i.e.,

    ℓ_emb = λ_1 · ℓ_2 + λ_2 · ℓ_perc + λ_3 · ℓ_adv        (5)

Here the basic ℓ2 loss is simply the pixel-value difference between the input host image b_i and the watermarked output image b'_i, where N_p is the total pixel number, i.e.,

    ℓ_2 = (1 / N_p) Σ_i ‖ b'_i − b_i ‖²        (6)

And the perceptual loss [16] is defined as the difference between the VGG features of b_i and b'_i:

    ℓ_perc = (1 / N_f) Σ_i ‖ VGG_k(b'_i) − VGG_k(b_i) ‖²        (7)

where VGG_k(·) denotes the features extracted at layer k (“conv2_2” by default), and N_f denotes the total feature neuron number. To further improve the visual quality and minimize the domain gap between B and B', the adversarial loss lets the embedding sub-network H hide watermarks so well that the discriminator D cannot differentiate its output from real watermark-free images in B, i.e.,

    ℓ_adv = E_{b_i ∈ B} [log D(b_i)] + E_{b'_i ∈ B'} [log(1 − D(b'_i))]        (8)
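A compact PyTorch sketch of the embedding loss in Eqs. (5)-(8) is shown below; it uses torchvision's VGG16 features up to relu2_2 for the perceptual term, and the weights λ_1-λ_3, the layer slicing and the omission of ImageNet normalization are assumptions made for brevity:

    # Sketch of the embedding loss (Eqs. 5-8); requires torchvision >= 0.13.
    import torch
    import torch.nn as nn
    from torchvision.models import vgg16

    vgg_feat = vgg16(weights="IMAGENET1K_V1").features[:9].eval()   # up to relu2_2
    for p in vgg_feat.parameters():
        p.requires_grad_(False)
    bce = nn.BCEWithLogitsLoss()

    def embedding_loss(b, b_wm, D, lam1=1.0, lam2=1.0, lam3=1.0):
        l2_term   = torch.mean((b_wm - b) ** 2)                       # Eq. (6)
        perc_term = torch.mean((vgg_feat(b_wm) - vgg_feat(b)) ** 2)   # Eq. (7)
        logits    = D(b_wm)                                           # Eq. (8), generator side
        adv_term  = bce(logits, torch.ones_like(logits))
        return lam1 * l2_term + lam2 * perc_term + lam3 * adv_term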

Extracting Loss. The responsibility of the extractor sub-network R has two aspects: it should be able to extract the target watermark δ from watermarked images of B' and output a constant blank image for watermark-free images from A and B. So the first two terms of ℓ_ext are the reconstruction losses ℓ_wm and ℓ_clean for these two types of images respectively, i.e.,

    ℓ_wm = Σ_{b'_i ∈ B'} ‖ R(b'_i) − δ ‖² ,   ℓ_clean = Σ_{x ∈ A ∪ B} ‖ R(x) − δ_0 ‖²        (9)

where δ_0 is the constant blank watermark image. Besides the reconstruction losses, we also want the watermarks extracted from different watermarked images to be consistent, so another consistency loss ℓ_cst is added:

    ℓ_cst = Σ_{i, j} ‖ R(b'_i) − R(b'_j) ‖²        (10)

Then ℓ_ext is defined as the weighted sum of these three terms, i.e.,

    ℓ_ext = λ_4 · ℓ_wm + λ_5 · ℓ_clean + λ_6 · ℓ_cst        (11)
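The extraction loss of Eqs. (9)-(11) can be sketched as follows (PyTorch; the broadcast-based pairwise consistency term and the weights λ_4-λ_6 are illustrative assumptions):

    # Sketch of the extraction loss (Eqs. 9-11): watermark reconstruction on B',
    # blank reconstruction on watermark-free A and B, and a consistency term
    # pulling extractions from different watermarked images together.
    import torch

    def extraction_loss(R, a, b, b_wm, delta, blank, lam4=1.0, lam5=1.0, lam6=1.0):
        ext = R(b_wm)
        wm_term    = torch.mean((ext - delta) ** 2)                          # Eq. (9), watermarked
        clean_term = torch.mean((R(a) - blank) ** 2) + \
                     torch.mean((R(b) - blank) ** 2)                         # Eq. (9), clean
        cst_term   = torch.mean((ext.unsqueeze(0) - ext.unsqueeze(1)) ** 2)  # Eq. (10), pairwise
        return lam4 * wm_term + lam5 * clean_term + lam6 * cst_term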

Adversarial Training Stage. With only the above initial training stage, R observes only clean watermarked images and cannot generalize well to the noisy watermarked outputs of some surrogate models. To enhance its extraction ability, an extra adversarial training stage is added. Specifically, one surrogate model SM is trained with the simple ℓ2 loss by default. Denoting the outputs of SM as B'_SM, we further fine-tune R on the mixed dataset B' ∪ B'_SM in this stage.
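A sketch of this two-step adversarial stage is shown below (PyTorch; names such as adversarial_stage, SM and loader are illustrative, and the epoch counts and learning rate are placeholders rather than our actual settings):

    # Sketch of the adversarial (second) training stage: a surrogate SM is first
    # fitted to pairs from (A, B') with an L2 loss, then the extractor R is
    # fine-tuned on the mix of clean watermarked images B' and the noisier SM outputs.
    import torch
    import torch.nn as nn

    def adversarial_stage(R, SM, loader, delta, epochs=5, lr=1e-4):
        l2 = nn.MSELoss()
        opt_sm = torch.optim.Adam(SM.parameters(), lr=lr)
        for _ in range(epochs):                    # step 1: train the surrogate
            for a, b_wm in loader:                 # pairs from domains A and B'
                opt_sm.zero_grad()
                l2(SM(a), b_wm).backward()
                opt_sm.step()

        opt_r = torch.optim.Adam(R.parameters(), lr=lr)
        for _ in range(epochs):                    # step 2: fine-tune the extractor
            for a, b_wm in loader:
                with torch.no_grad():
                    b_sm = SM(a)                   # noisy watermarked output of SM
                opt_r.zero_grad()
                x = torch.cat([b_wm, b_sm], dim=0) # mixed dataset B' plus SM outputs
                l2(R(x), delta.expand_as(x)).backward()
                opt_r.step()
        return R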

Experiments

In this paper, two example image processing tasks are considered: image deraining and chest X-ray image debone. The goal of these two tasks is to remove the rain streaks and the rib components from the input images respectively. To demonstrate the effectiveness of our method, we first show that the newly introduced deep invisible watermarking algorithm can hide high-capacity image-based watermarks, then evaluate the robustness of the proposed deep watermarking framework against different surrogate models. Finally, some ablation analysis is provided to justify the motivation of our design and shed light on further potentials.

Implementation Details. For image deraining, we use 12100 images from the PASCAL VOC dataset as the target domain B, and use the synthesis algorithm in [34] to generate rainy images as domain A. These images are split into three parts: 6000 for the initial and adversarial training, 6000 to train the surrogate model, and 100 for testing. Similarly, for X-ray image debone, we select 6100 high-quality chest X-ray images from the open dataset ChestX-ray8 [32] and use the rib suppression algorithm proposed by [33] to generate the training pairs. They are also divided into three parts: 3000 for the initial and adversarial training, 3000 to train the surrogate model, and 100 for testing. By default, all the loss weights λ are set to 1.

Evaluation Metric. To evaluate the visual quality, PSNR and SSIM are used by default. To judge whether the watermark is extracted successfully, we use the classic normalized correlation (NC) metric as in previous watermarking methods. The watermark is regarded as successfully extracted if its NC value is larger than a given threshold. Based on it, the success rate (SR) is further defined as the ratio of watermarked images whose hidden watermark is successfully extracted.
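The two metrics can be sketched as follows (NumPy; the threshold value tau used here is an assumption for illustration, not necessarily the value used in our experiments):

    # Sketch of the evaluation metrics: normalized correlation (NC) between the
    # extracted and target watermark, and the success rate (SR) over a test set.
    import numpy as np

    def nc(extracted, target):
        e = extracted.astype(np.float64).ravel()
        t = target.astype(np.float64).ravel()
        return float(np.dot(e, t) / (np.linalg.norm(e) * np.linalg.norm(t) + 1e-12))

    def success_rate(extracted_list, target, tau=0.95):   # tau is a hypothetical threshold
        hits = sum(nc(e, target) > tau for e in extracted_list)
        return hits / max(len(extracted_list), 1)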

Deep Image-Based Invisible Watermarking. In this experiment, we give both quantitative and qualitative results for the proposed deep image-based invisible watermarking algorithm. For debone and deraining, an AAAI logo image and a colorful flower image are used as the example watermark images respectively. As shown in Table 1, the embedding sub-network H and the extractor sub-network R collaborate very well. H can hide the image watermark in the host images invisibly with high visual quality (average PSNR 39.98 and 47.89 for derain and debone respectively), and R can extract these hidden watermarks afterward with an average NC value over 0.99 (100% success rate). Two visual examples are shown in Figure 4.

Task PSNR SSIM NC
Debone-aaai 47.89 0.99 0.9999
Derain-flower 39.98 0.99 0.9966
Table 1: Quantitative results of the proposed invisible image based watermarking. *-aaai and *-flower use the AAAI logo and a colorful flower image as watermarks respectively.
Figure 4: Two examples of hiding image watermark into host images with the proposed invisible watermarking algorithm.

Robustness to Attacks from Surrogate Models. To evaluate the final robustness of the proposed deep watermarking framework, we use many surrogate models trained with different network structures and objective loss functions to imitate the attackers' behavior. Four different types of network structures are considered: a vanilla convolutional network consisting of only several convolutional layers (“CNet”), auto-encoder-like networks with 9 and 16 residual blocks (“Res9”, “Res16”), and the aforementioned UNet network (“UNet”). For objective loss functions, popular losses such as ℓ1, ℓ2, the perceptual loss ℓ_perc, the adversarial loss ℓ_adv and their combinations are considered. Since one surrogate model with the “UNet” structure and the ℓ2 loss is leveraged in the adversarial training stage, this configuration can be viewed as a white-box attack and all other configurations are black-box attacks.

Setting   T-Debone   T-Derain   D-Debone   D-Derain   D-Debone†   D-Derain†
CNet      0%         0%         92%        100%       0%          0%
Res9      0%         0%         100%       100%       0%          0%
Res16     0%         0%         100%       100%       0%          0%
UNet      100%       100%       100%       100%       0%          0%
Table 2: The success rate (SR) of resisting the attack from surrogate models trained with the ℓ2 loss but different network structures. T-* means the results of using traditional spatial invisible watermarking algorithms to hide a 64-bit watermark, while D-* means those of the proposed deep invisible watermarking algorithm to hide watermark images. † denotes the results without adversarial training.

Due to limited computation resources, we do not consider all combinations of different network structures and loss functions. Instead, we conduct control experiments to demonstrate the robustness to network structures and loss functions respectively. In Table 2, both the traditional spatial bit-based invisible watermarking algorithm (hiding 64 bits) and the proposed deep image-based invisible watermarking algorithm are tested. Though only the UNet-based surrogate model trained with the ℓ2 loss is leveraged in the adversarial training stage, we find the proposed deep model watermarking framework can resist both white-box and black-box attacks when equipped with the newly proposed deep image-based invisible watermarking technique. Traditional watermarking algorithms can only resist the attacks of some specific surrogate models, because their extraction algorithms cannot handle the noisy watermarked images from different surrogate models. More importantly, they cannot hide high-capacity watermarks like logo images. We have also tried many traditional transform-domain watermarking algorithms, such as DCT-based [6], DFT-based [18] and DWT-based [17], but none of them works: they all achieve a 0% success rate.

To further demonstrate the robustness to different losses, we use the UNet as the default network structure and train surrogate models with different combinations of loss functions. As shown in Table 3, the proposed deep watermarking framework has a very strong generalization ability and can resist different loss combinations with a very high success rate. Since in real scenarios the detailed network structure and the training objective functions are the parts of the surrogate model that attackers most often change, we have good reason to believe the proposed deep watermarking framework is applicable in these cases.

Task        ℓ1      ℓ1 + ℓ_adv   ℓ2      ℓ2 + ℓ_adv   ℓ_perc   ℓ_perc + ℓ_adv
D-Debone    100%    100%         100%    100%         88%      92%
D-Derain    100%    100%         100%    100%         86%      100%
D-Debone†   0%      98%          0%      100%         0%       0%
D-Derain†   0%      0%           0%      100%         24%      0%
Table 3: The success rate (SR) of resisting the attack from surrogate models trained with different loss combinations. † means the results without adversarial training.

Ablation Study

Figure 5: Comparison results with (first row) and without (second row) the clean loss. The second and last columns are the watermarks extracted from the watermark-free images of domains A and B respectively.

The Importance of Clean Loss and Consistency Loss. Besides the watermark reconstruction loss, we add a clean loss and a consistency loss to the extraction loss. To demonstrate their importance, two control experiments are conducted. As shown in Figure 5, without the clean loss, the extractor always extracts a meaningless watermark from the watermark-free images of domains A and B. Especially for images of domain B, the extracted watermarks have a quite large NC value, which makes forensics meaningless. Similarly, in Figure 6, we find the extractor can only extract very weak watermarks, or even no watermark at all, when trained without the consistency loss. By contrast, our full method can always extract very clear watermarks.

The Importance of Adversarial Training. As described above, to enhance the extraction ability of R, an additional adversarial training stage is used. To demonstrate its necessity, we also conduct control experiments without adversarial training and attach the corresponding results in Table 2 and Table 3 (labelled with †). It can be seen that, with the default ℓ2 loss, the resisting success rate drops to 0% for surrogate models of all the different network structures. When using UNet as the network structure but training with different losses, we find that only for some special surrogate models can the hidden watermarks be partially extracted, which demonstrates the significant importance of the adversarial training.

Figure 6: Comparison results with (first row) and without (second row) the consistency loss. The last column is the watermark extracted from the output of the surrogate model.

Task-Specific Deep Invisible Watermarks. In our default setting, the embedding and extractor sub-networks are task-agnostic and appended as a general barrier. In this experiment, we try a more challenging setting that makes the embedding sub-network H task-specific. Taking debone as an example, the input image of H is now directly an image of domain A that still contains rib components, and H has to remove the rib components and hide the watermark simultaneously. In this case, H itself is the final watermarked target model. For comparison, we also train a baseline debone model without the need to hide watermarks. We find the above task-specific watermarked model can achieve results (PSNR: 24.49, SSIM: 0.91) very comparable to this baseline model (PSNR: 25.81, SSIM: 0.91).

Extension to Protect Data and Traditional Algorithms. Though our motivation is to protect deep models, the proposed framework can easily be extended to protect valuable data or traditional algorithms by directly embedding the watermarks into their labeled ground-truth images or outputs.

Conclusion

We introduce the deep watermarking problem for image processing networks for the first time. Inspired by traditional spatial invisible media watermarking, the first deep model watermarking framework is proposed. To make it robust to different surrogate models and support image-based watermarks, we propose a novel deep invisible watermarking technique. Experiments demonstrate that our framework can resist attacks from surrogate models trained with different network structures and loss functions. We hope this work can inspire more great works in this seriously under-researched field.

Acknowledgments

This work was partially supported by the Natural Science Foundation of China under Grant U1636201, 61572452.

References

  1. Y. Adi, C. Baum, M. Cisse, B. Pinkas and J. Keshet (2018) Turning your weakness into a strength: watermarking deep neural networks by backdooring. In USENIX.
  2. M. Barni, F. Bartolini and A. Piva (2001) Improved wavelet-based watermarking through pixel-wise masking. TIP 10 (5), pp. 783–791.
  3. Y. Choi, M. Choi, M. Kim, J. Ha, S. Kim and J. Choo (2018) StarGAN: unified generative adversarial networks for multi-domain image-to-image translation. In CVPR.
  4. F. Deguillaume, S. V. Voloshynovskiy and T. Pun (2002) Method for the estimation and recovering from general affine transforms in digital watermarking applications. In SWMC, Vol. 4675, pp. 313–322.
  5. Q. Fan, J. Yang, G. Hua, B. Chen and D. Wipf (2017) A generic deep architecture for single image reflection removal and image smoothing. In ICCV, pp. 3238–3247.
  6. H. Fang, W. Zhang, H. Zhou, H. Cui and N. Yu (2018) Screen-shooting resilient watermarking. TIFS.
  7. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville and Y. Bengio (2014) Generative adversarial nets. In NIPS, pp. 2672–2680.
  8. A. Graves, A. Mohamed and G. Hinton (2013) Speech recognition with deep recurrent neural networks. In ICASSP, pp. 6645–6649.
  9. K. He, X. Zhang, S. Ren and J. Sun (2016) Deep residual learning for image recognition. In CVPR, pp. 770–778.
  10. J. R. Hernandez, M. Amado and F. Perez-Gonzalez (2000) DCT-domain watermarking techniques for still images: detector performance analysis and a new structure. TIP.
  11. S. Hong, M. Wu, H. Li and Z. Wu (2017) Event2vec: learning representations of events on temporal sequences. In APWEB-WAIM, pp. 33–47.
  12. S. Hong, M. Wu, Y. Zhou, Q. Wang, J. Shang, H. Li and J. Xie (2017) ENCASE: an ensemble classifier for ECG classification using expert features and deep neural networks. In CinC.
  13. S. Hong, Y. Zhou, M. Wu, J. Shang, Q. Wang, H. Li and J. Xie (2019) Combining deep neural networks and engineered features for cardiac arrhythmia detection from ECG recordings. PMEA.
  14. C. Hsu and J. Wu (1999) Hidden digital watermarks in images. TIP 8 (1), pp. 58–68.
  15. P. Isola, J. Zhu, T. Zhou and A. A. Efros (2017) Image-to-image translation with conditional adversarial networks. In CVPR.
  16. J. Johnson, A. Alahi and L. Fei-Fei (2016) Perceptual losses for real-time style transfer and super-resolution. In ECCV, pp. 694–711.
  17. X. Kang, J. Huang, Y. Q. Shi and Y. Lin (2003) A DWT-DFT composite watermarking scheme robust to both affine transform and JPEG compression. TCSVT 13 (8), pp. 776–786.
  18. X. Kang, J. Huang and W. Zeng (2010) Efficient general print-scanning resilient data hiding based on uniform log-polar mapping. TIFS 5 (1), pp. 1–12.
  19. A. Krizhevsky, I. Sutskever and G. E. Hinton (2012) ImageNet classification with deep convolutional neural networks. In NIPS, pp. 1097–1105.
  20. M. Kutter (1999) Watermarking resistance to translation, rotation, and scaling. In MSA, Vol. 3528, pp. 423–431.
  21. E. L. Merrer, P. Perez and G. Trédan (2017) Adversarial frontier stitching for remote neural network watermarking. arXiv.
  22. Y. Nagai, Y. Uchida, S. Sakazawa and S. Satoh (2018) Digital watermarking for deep neural networks. IJMIR.
  23. T. Park, M. Liu, T. Wang and J. Zhu (2019) Semantic image synthesis with spatially-adaptive normalization. In CVPR, pp. 2337–2346.
  24. O. Ronneberger, P. Fischer and T. Brox (2015) U-Net: convolutional networks for biomedical image segmentation. In MICCAI, pp. 234–241.
  25. J. Ruanaidh, W. Dowling and F. M. Boland (1996) Phase watermarking of digital images. In ICIP.
  26. M. Tancik, B. Mildenhall and R. Ng (2019) StegaStamp: invisible hyperlinks in physical photographs. arXiv.
  27. Y. Uchida, Y. Nagai, S. Sakazawa and S. Satoh (2017) Embedding watermarks into deep neural networks. In ICMR, pp. 269–277.
  28. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser and I. Polosukhin (2017) Attention is all you need. In NIPS, pp. 5998–6008.
  29. S. Voloshynovskiy, F. Deguillaume and T. Pun (2000) Content adaptive watermarking based on a stochastic multiresolution image modeling. In EUSIPCO, pp. 1–4.
  30. S. Voloshynovskiy, F. Deguillaume and T. Pun (2001) Multibit digital watermarking robust against local nonlinear geometrical distortions. In ICIP, Vol. 3, pp. 999–1002.
  31. T. Wang, M. Liu, J. Zhu, A. Tao, J. Kautz and B. Catanzaro (2018) High-resolution image synthesis and semantic manipulation with conditional GANs. In CVPR.
  32. X. Wang, Y. Peng, L. Lu, Z. Lu, M. Bagheri and R. M. Summers (2017) ChestX-ray8: hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In CVPR.
  33. W. Yang, Y. Chen, Y. Liu, L. Zhong, G. Qin, Z. Lu, Q. Feng and W. Chen (2017) Cascade of multi-scale convolutional neural networks for bone suppression of chest radiographs in gradient domain. Medical Image Analysis 35.
  34. H. Zhang and V. M. Patel (2018) Density-aware single image de-raining using a multi-stream dense network. In CVPR, pp. 695–704.
  35. J. Zhang, Z. Gu, J. Jang, H. Wu, M. P. Stoecklin, H. Huang and I. Molloy (2018) Protecting intellectual property of deep neural networks with watermarking. In ASIACCS, pp. 159–172.
  36. J. Zhang, Y. Chen, S. Hong and H. Li (2017) REBUILD: graph embedding based method for user social role identity on mobile communication network. In DMBD.
  37. Y. Zhang, M. Pezeshki, P. Brakel, S. Zhang, C. L. Y. Bengio and A. Courville (2017) Towards end-to-end speech recognition with deep convolutional neural networks. arXiv.
  38. J. Zhu, R. Kaplan, J. Johnson and L. Fei-Fei (2018) HiDDeN: hiding data with deep networks. In ECCV, pp. 657–672.
  39. J. Zhu, T. Park, P. Isola and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV, pp. 2223–2232.