Structured Output Learning with Conditional Generative Flows


You Lu
Department of Computer Science
Virginia Tech
Blacksburg, VA
you.lu@vt.edu
Bert Huang
Department of Computer Science
Virginia Tech
Blacksburg, VA
bhuang@vt.edu
Abstract

Traditional structured prediction models try to learn the conditional likelihood, i.e., $p(\mathbf{y}|\mathbf{x})$, to capture the relationship between the structured output $\mathbf{y}$ and the input features $\mathbf{x}$. For many models, computing the likelihood is intractable. These models are therefore hard to train, requiring the use of surrogate objectives or variational inference to approximate likelihood. In this paper, we propose conditional Glow (c-Glow), a conditional generative flow for structured output learning. C-Glow benefits from the ability of flow-based models to compute $p(\mathbf{y}|\mathbf{x})$ exactly and efficiently. Learning with c-Glow does not require a surrogate objective or performing inference during training. Once trained, we can directly and efficiently generate conditional samples. We develop a sample-based prediction method, which can use this advantage to do efficient and effective inference. In our experiments, we test c-Glow on five different tasks. C-Glow outperforms the state-of-the-art baselines in some tasks and predicts comparable outputs in the others. The results show that c-Glow is versatile and applicable to many different structured prediction problems.

1 Introduction

Structured prediction models are widely used in tasks such as image segmentation [Nowozin and Lampert2011] and sequence labeling [Lafferty, McCallum, and Pereira2001]. In these structured output tasks, the goal is to model a mapping from the input $\mathbf{x}$ to the high-dimensional structured output $\mathbf{y}$. In many such problems, it is also important to make diverse predictions to capture the variability of plausible solutions to the structured output problem [Sohn, Lee, and Yan2015].

Many existing methods for structured output learning use graphical models, such as conditional random fields (CRFs) [Wainwright and Jordan2008], and approximate the conditional distribution $p(\mathbf{y}|\mathbf{x})$. Approximation is necessary because, for most graphical models, computing the exact likelihood is intractable. Recently, deep structured prediction models [Chen et al.2015, Zheng et al.2015, Sohn, Lee, and Yan2015, Wang, Fidler, and Urtasun2016, Belanger and McCallum2016, Graber, Meshi, and Schwing2018] have combined deep neural networks with graphical models, using the power of deep neural networks to extract high-quality features and graphical models to model correlations and dependencies among variables. The main drawback of these approaches is that, due to the intractable likelihood, they are difficult to train. Training them requires the construction of surrogate objectives, or approximating the likelihood by using variational inference to infer latent variables. Moreover, once the model is trained, inference and sampling from CRFs require expensive iterative procedures [Koller and Friedman2009].

In this paper, we develop conditional generative flows (c-Glow) for structured output learning. Our model is a variant of Glow [Kingma and Dhariwal2018], with additional neural networks for capturing the relationship between input features and structured output variables. Compared to most methods for structured output learning, c-Glow has the unique advantage that it can directly model the conditional distribution $p(\mathbf{y}|\mathbf{x})$ without restrictive assumptions (e.g., variables being fully connected [Krähenbühl and Koltun2011]). We can train c-Glow by exploiting the fact that invertible flows allow exact computation of the log-likelihood, removing the need for surrogates or inference. Compared to other methods using normalizing flows (e.g., [Trippe and Turner2018, Kingma and Dhariwal2018]), c-Glow conditions on complex input features, and its output label is a high-dimensional tensor rather than a one-dimensional scalar. We evaluate c-Glow on five structured prediction tasks: binary segmentation, multi-class segmentation, color image denoising, depth refinement, and image inpainting, finding that c-Glow's exact likelihood training is able to learn models that efficiently predict structured outputs of comparable quality to state-of-the-art deep structured prediction approaches.

2 Related Work

There are two main topics of research related to our paper: deep structured prediction and normalizing flows. In this section, we briefly cover some of the most related literature.

2.1 Deep Structured Models

One emerging strategy to construct deep structured models is to combine deep neural networks with graphical models. However, this kind of model can be difficult to train, since the likelihood of graphical models is usually intractable. Chen et al. (2015) proposed joint learning approaches that blend learning and approximate inference to alleviate some of these computational challenges. Zheng et al. (2015) proposed CRF-RNN, a method that treats mean-field variational CRF inference as a recurrent neural network to allow gradient-based learning of model parameters. Wang, Fidler, and Urtasun (2016) proposed proximal methods for inference. And Sohn, Lee, and Yan (2015) used variational autoencoders [Kingma and Welling2013] to generate latent variables for predicting the output. While using a surrogate for the true likelihood is generally viewed as a concession, Norouzi et al. (2016) found that training with a tractable task-specific loss often yielded better performance for the goal of reducing specific task losses than training with general-purpose likelihood approximations. Their analysis hints that fitting a distribution with a true likelihood may not always train the best predictor for specific tasks.

Another direction combining structured output learning with deep models is to construct energy functions with deep networks. Structured prediction energy networks (SPENs) [Belanger and McCallum2016] define energy functions for scoring structured outputs as differentiable deep networks. The likelihood of a SPEN is intractable, so the authors used structured SVM loss to learn. SPENs can also be trained in an end-to-end learning framework [Belanger, Yang, and McCallum2017] based on unrolled optimization. Methods to alleviate the cost of SPEN inference include replacing the argmax inference with an inference network [Tu and Gimpel2018]. Inspired by Q-learning, Gygli, Norouzi, and Angelova (2017) used an oracle value function as the objective for energy-based deep networks. Graber, Meshi, and Schwing (2018) generalized SPENs by adding non-linear transformations on top of the score function.

2.2 Normalizing Flows

Normalizing flows are neural networks constructed with fully invertible components. The invertibility of the resulting network provides various mathematical benefits. Normalizing flows have been successfully used to build likelihood-based deep generative models [Dinh, Krueger, and Bengio2014, Dinh, Sohl-Dickstein, and Bengio2016, Kingma and Dhariwal2018] and to improve variational approximation [Rezende and Mohamed2015, Kingma et al.2016]. Autoregressive flows [Kingma et al.2016, Papamakarios, Pavlakou, and Murray2017, Huang et al.2018, Ziegler and Rush2019] condition each affine transformation on all previous variables, so that they ensure an invertible transformation and triangular Jacobian matrix. Continuous normalizing flows [Chen et al.2018, Grathwohl et al.2018] define the transformation function using ordinary differential equations. While most normalizing flow models define generative models, Trippe and Turner (2018) developed radial flows to model univariate conditional probabilities.

Most related to our approach are flow-based generative models for complex output. Dinh, Krueger, and Bengio (2014) first proposed a flow-based model, NICE, for modeling complex high-dimensional densities. They later proposed Real-NVP [Dinh, Sohl-Dickstein, and Bengio2016], which improves the expressiveness of NICE by adding more flexible coupling layers. The Glow model [Kingma and Dhariwal2018] further improved the performance of such approaches by incorporating new invertible layers. Most recently, Flow++ [Ho et al.2019] improved generative flows with variational dequantization and architecture design, and Hoogeboom, van den Berg, and Welling (2019) proposed new invertible convolutional layers for flow-based models.

3 Background

In this section, we introduce notation and background knowledge directly related to our work.

3.1 Structured Output Learning

Let $\mathbf{x}$ and $\mathbf{y}$ be random variables with unknown true distribution $p^*(\mathbf{y}|\mathbf{x})$. We collect a dataset $\mathcal{D} = \{(\mathbf{x}_1, \mathbf{y}_1), \ldots, (\mathbf{x}_N, \mathbf{y}_N)\}$, where $\mathbf{x}_i$ is the $i$th input and $\mathbf{y}_i$ is the corresponding output. We approximate $p^*(\mathbf{y}|\mathbf{x})$ with a model $p_\theta(\mathbf{y}|\mathbf{x})$ and minimize the negative log-likelihood

$\mathcal{L}(\theta) = -\frac{1}{N} \sum_{i=1}^{N} \log p_\theta(\mathbf{y}_i | \mathbf{x}_i).$

In structured output learning, the label $\mathbf{y}$ comes from a complex, high-dimensional output space with dependencies among output dimensions. Many structured output learning approaches use an energy-based model to define a conditional distribution:

$p(\mathbf{y}|\mathbf{x}) = \frac{\exp\left(-E(\mathbf{y}, \mathbf{x})\right)}{Z(\mathbf{x})},$

where $E(\mathbf{y}, \mathbf{x})$ is the energy function. In deep structured prediction, $E$ depends on $\mathbf{x}$ via a deep network. Due to the high dimensionality of $\mathbf{y}$, the partition function, i.e., $Z(\mathbf{x}) = \int \exp\left(-E(\mathbf{y}, \mathbf{x})\right) d\mathbf{y}$, is intractable. To train the model, we need methods to approximate the partition function such as variational inference or surrogate objectives, resulting in complicated training and sub-optimal results.

3.2 Conditional Normalizing Flows

A normalizing flow is a composition of invertible functions $f = f_1 \circ f_2 \circ \cdots \circ f_M$, which transforms the target $\mathbf{y}$ into a latent code $\mathbf{z}$ drawn from a simple distribution. In conditional normalizing flows [Trippe and Turner2018], we rewrite each function as $f_{\phi_i, i}$, making it parameterized by both $\mathbf{x}$ and its parameter $\phi_i$. Thus, with the change of variables formula, we can rewrite the conditional likelihood as

$\log p_\theta(\mathbf{y}|\mathbf{x}) = \log p_Z(\mathbf{z}|\mathbf{x}) + \sum_{i=1}^{M} \log \left| \det \frac{\partial f_{\phi_i, i}}{\partial \mathbf{r}_{i-1}} \right|, \qquad (1)$

where $\mathbf{r}_0 = \mathbf{y}$, $\mathbf{r}_M = \mathbf{z}$, and $\mathbf{r}_i = f_{\phi_i, i}(\mathbf{r}_{i-1})$.
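To make Equation 1 concrete, the following sketch (in PyTorch, using a hypothetical step interface with `forward(r, x)` methods; this is not the authors' code) accumulates the log-determinant of each invertible step and adds the log-density of the resulting latent code under a standard Gaussian base distribution:

```python
import torch

def conditional_log_likelihood(steps, y, x):
    """Eq. 1: log p(y|x) = log p_Z(z|x) + sum_i log|det(d f_i / d r_{i-1})|."""
    r, total_logdet = y, 0.0
    for step in steps:                   # r_0 = y, ..., r_M = z
        r, logdet = step.forward(r, x)   # each step is invertible given x; returns per-example log-det
        total_logdet = total_logdet + logdet
    z = r
    base = torch.distributions.Normal(0.0, 1.0)   # simple base density p_Z
    log_pz = base.log_prob(z).flatten(1).sum(dim=1)
    return log_pz + total_logdet         # one log-likelihood value per example
```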

In this paper, we address the structured output problem by using normalizing flows. That is, we directly use conditional normalizing flows, i.e., Equation 1, to calculate the conditional distribution. Thus, the model can be trained by locally optimizing the exact likelihood. Note that conditional normalizing flows have been used for conditional density estimation: Trippe and Turner (2018) use them to solve one-dimensional regression problems. Our method is different from theirs in that the labels in our problem are high-dimensional tensors rather than scalars. We therefore build on recently developed methods for (unconditional) flow-based generative models for high-dimensional data.

3.3 Glow

Glow [Kingma and Dhariwal2018] is a flow-based generative model that extends other flow-based models: NICE [Dinh, Krueger, and Bengio2014] and Real-NVP [Dinh, Sohl-Dickstein, and Bengio2016]. Glow's modifications have demonstrated significant improvements in likelihood and sample quality for natural images. The model mainly consists of three components. Let $\mathbf{u}$ and $\mathbf{v}$ be the input and output of a layer, whose shape is $[h \times w \times c]$, with spatial dimensions $h$ and $w$ and channel dimension $c$. The three components are as follows.

Actnorm layers. Each activation normalization (actnorm) layer performs an affine transformation of activations using two parameters, i.e., a scale $\mathbf{s}$ and a bias $\mathbf{b}$. The transformation can be written as

$\mathbf{v} = \mathbf{s} \odot \mathbf{u} + \mathbf{b},$

where $\odot$ is the element-wise product.

Invertible 1×1 convolutional layers. Each invertible 1×1 convolutional layer is a generalization of a permutation operation. Its function format is

$\mathbf{v}_{i,j} = W \mathbf{u}_{i,j},$

where $W$ is a $c \times c$ weight matrix.

Affine layers. As in the NICE and Real-NVP models, Glow also has affine coupling layers to capture the correlations among spatial dimensions. Its transformation is

$\mathbf{u}_a, \mathbf{u}_b = \mathrm{split}(\mathbf{u}), \quad (\log \mathbf{s}, \mathbf{t}) = \mathrm{NN}(\mathbf{u}_b),$
$\mathbf{v}_a = \mathbf{s} \odot \mathbf{u}_a + \mathbf{t}, \quad \mathbf{v}_b = \mathbf{u}_b, \quad \mathbf{v} = \mathrm{concat}(\mathbf{v}_a, \mathbf{v}_b),$

where NN is a neural network, and the split and concat functions perform operations along the channel dimension. The $\mathbf{s}$ and $\mathbf{t}$ vectors have the same size as $\mathbf{u}_a$.
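For concreteness, here is a minimal, self-contained sketch of an affine coupling layer in the Real-NVP/Glow style. The two-layer internal network and the channel counts are illustrative choices; the `sigmoid(log_s + 2)` scale parameterization follows Glow's public implementation:

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Affine coupling: transform half the channels conditioned on the other half."""

    def __init__(self, channels, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels // 2, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),  # outputs (log_s, t)
        )

    def forward(self, u):
        u_a, u_b = u.chunk(2, dim=1)              # split along the channel dimension
        log_s, t = self.net(u_b).chunk(2, dim=1)
        s = torch.sigmoid(log_s + 2.0)            # bounded positive scale, as in Glow
        v = torch.cat([s * u_a + t, u_b], dim=1)  # u_b passes through unchanged
        logdet = s.log().flatten(1).sum(dim=1)    # triangular Jacobian: product of scales
        return v, logdet

    def inverse(self, v):
        v_a, u_b = v.chunk(2, dim=1)
        log_s, t = self.net(u_b).chunk(2, dim=1)
        s = torch.sigmoid(log_s + 2.0)
        return torch.cat([(v_a - t) / s, u_b], dim=1)
```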

Glow uses a multi-scale architecture [Dinh, Sohl-Dickstein, and Bengio2016] to combine the layers. This architecture has a “squeeze” layer for shuffling the variables and a “split” layer for reducing the computational cost.

4 Conditional Generative Flows for Structured Output Learning

(a) Glow architecture
(b) c-Glow architecture
Figure 1: Model architectures for Glow and conditional Glow. For each model, the left sub-graph is the architecture of each step, and the right sub-graph is the whole architecture. The parameter $L$ represents the number of levels, and $K$ represents the depth of each level.

This section describes our conditional generative flow (c-Glow), a flow-based model for structured prediction.

4.1 Conditional Glow

To modify Glow to be a conditional generative flow, we need to add conditioning architectures to its three components: the actnorm layer, the 1×1 convolutional layer, and the affine coupling layer. The main idea is to use a neural network, which we refer to as a conditioning network (CN), to generate the parameter weights for each layer. The details are as follows.

Conditional actnorm. The parameters of an actnorm layer are two vectors, i.e., the scale $\mathbf{s}$ and the bias $\mathbf{b}$. In conditional Glow, we use a CN to generate these two vectors and then use them to transform the variable, i.e.,

$(\mathbf{s}, \mathbf{b}) = \mathrm{CN}(\mathbf{x}), \quad \mathbf{v} = \mathbf{s} \odot \mathbf{u} + \mathbf{b}.$

Conditional 1×1 convolutional. The 1×1 convolutional layer uses a weight matrix $W$ to permute each spatial dimension's variable. In conditional Glow, we use a conditioning network to generate this matrix:

$W = \mathrm{CN}(\mathbf{x}), \quad \mathbf{v}_{i,j} = W \mathbf{u}_{i,j}.$

Conditional affine coupling. The affine coupling layer separates the input variable into two halves, i.e., $\mathbf{u}_a$ and $\mathbf{u}_b$. It uses $\mathbf{u}_b$ as the input to an NN to generate scale and bias parameters for $\mathbf{u}_a$. To build a conditional affine coupling layer, we use a CN to extract features $\mathbf{x}_r$ from $\mathbf{x}$, and then we concatenate them with $\mathbf{u}_b$ to form the input of NN:

$\mathbf{u}_a, \mathbf{u}_b = \mathrm{split}(\mathbf{u}), \quad \mathbf{x}_r = \mathrm{CN}(\mathbf{x}), \quad (\log \mathbf{s}, \mathbf{t}) = \mathrm{NN}(\mathrm{concat}(\mathbf{u}_b, \mathbf{x}_r)),$
$\mathbf{v}_a = \mathbf{s} \odot \mathbf{u}_a + \mathbf{t}, \quad \mathbf{v}_b = \mathbf{u}_b, \quad \mathbf{v} = \mathrm{concat}(\mathbf{v}_a, \mathbf{v}_b).$
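A minimal sketch of one of these layers, the conditional actnorm, may help. The small conditioning network here (pooling followed by a linear layer) is our own illustrative choice, not the paper's exact CN; the log-determinant follows from the element-wise affine transformation:

```python
import torch
import torch.nn as nn

class ConditionalActnorm(nn.Module):
    """Conditional actnorm: (s, b) = CN(x); v = s * u + b."""

    def __init__(self, x_channels, u_channels):
        super().__init__()
        self.cn = nn.Sequential(               # illustrative conditioning network
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(x_channels, 2 * u_channels),
        )

    def forward(self, u, x):
        s, b = self.cn(x).chunk(2, dim=1)      # per-channel parameters from x
        s = s.exp()[:, :, None, None]          # keep the scale positive
        b = b[:, :, None, None]
        v = s * u + b
        h, w = u.shape[2], u.shape[3]
        logdet = h * w * s.log().flatten(1).sum(dim=1)  # each channel scale acts on h*w entries
        return v, logdet
```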

We can still use the multi-scale architecture to combine these conditional components to preserve the efficiency of computation. Figure 1 illustrates the Glow and c-Glow architectures for comparison.

Since the conditioning networks do not need to be invertible when optimizing a conditional model, we define the general approach without restrictions to their architectures here. Any differentiable network suffices and preserves the ability of c-Glow to compute the exact conditional likelihood of each input-output pair. We will specify the architectures we use in our experiments in Section 5.1.

4.2 Learning

To learn the model parameters, we can take advantage of the efficiently computable log-likelihood for flow-based models. In cases where the output is continuous, the likelihood calculation is direct. Therefore, we can back-propagate to differentiate the exact conditional likelihood, i.e., Eq. 1, and optimize all c-Glow parameters using gradient methods.

In cases where the output is discrete, we follow [Dinh, Sohl-Dickstein, and Bengio2016, Kingma and Dhariwal2018, Ho et al.2019] and add uniform noise to $\mathbf{y}$ during training to dequantize the data. This procedure augments the dataset and prevents model collapse. We can still use back-propagation and gradient methods to optimize the likelihood of this approximate continuous distribution. By extending the proofs of Theis, van den Oord, and Bethge (2015) and Ho et al. (2019), we can show that the discrete log-likelihood is lower-bounded by the continuous log-likelihood.

With a slight abuse of notation, we let $P_\theta$ be our discrete hypothesis distribution and $p_\theta$ be the dequantized continuous model. Then our goal is to maximize the likelihood $P_\theta(\mathbf{y}|\mathbf{x})$, which can be expressed by marginalizing over the continuous values that round to $\mathbf{y}$:

$P_\theta(\mathbf{y}|\mathbf{x}) = \int_{[0,1)^D} p_\theta(\mathbf{y} + \mathbf{u} | \mathbf{x}) \, d\mathbf{u},$

where $D$ is the variable's dimension, and $\mathbf{u}$ represents the difference between the continuous variable and the rounded, quantized $\mathbf{y}$.

Let $P_{\mathrm{data}}$ be the true data distribution, and $\tilde{p}_{\mathrm{data}}$ be the distribution of the dequantized dataset. The learning process maximizes $\mathbb{E}_{(\mathbf{x}, \tilde{\mathbf{y}}) \sim \tilde{p}_{\mathrm{data}}}\left[\log p_\theta(\tilde{\mathbf{y}}|\mathbf{x})\right]$. We expand this and apply Jensen's inequality to obtain the bound:

$\mathbb{E}_{(\mathbf{x}, \tilde{\mathbf{y}}) \sim \tilde{p}_{\mathrm{data}}}\left[\log p_\theta(\tilde{\mathbf{y}}|\mathbf{x})\right] = \mathbb{E}_{(\mathbf{x}, \mathbf{y}) \sim P_{\mathrm{data}}} \int_{[0,1)^D} \log p_\theta(\mathbf{y} + \mathbf{u} | \mathbf{x}) \, d\mathbf{u} \le \mathbb{E}_{(\mathbf{x}, \mathbf{y}) \sim P_{\mathrm{data}}}\left[\log P_\theta(\mathbf{y}|\mathbf{x})\right].$

Therefore, when $\mathbf{y}$ is discrete, the learning optimization, which maximizes the continuous likelihood $p_\theta$, maximizes a lower bound on the discrete likelihood $P_\theta$.
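A sketch of the dequantization step is below. The paper states only that uniform noise is added to $\mathbf{y}$; rescaling to $[0, 1)$ by the number of discrete levels is a common convention we assume here:

```python
import torch

def dequantize(y, num_levels=256):
    """Map discrete y in {0, ..., num_levels - 1} to a continuous value in [0, 1)."""
    noise = torch.rand_like(y.float())   # u ~ Uniform[0, 1)^D
    return (y.float() + noise) / num_levels
```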

4.3 Inference

Given a learned model $p_\theta(\mathbf{y}|\mathbf{x})$, we can perform efficient sampling with a single forward pass through the c-Glow. We first calculate the transformation functions given $\mathbf{x}$ and then sample the latent code $\mathbf{z}$ from $p_Z(\mathbf{z}|\mathbf{x})$. Finally, we propagate the sampled $\mathbf{z}$ through the model to get the corresponding sample $\mathbf{y}$. The whole process can be summarized as

$\mathbf{z} \sim p_Z(\mathbf{z}|\mathbf{x}), \quad \mathbf{y} = g_\phi(\mathbf{x}, \mathbf{z}), \qquad (2)$

where $g_\phi$ is the inverse of $f_\phi$.
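Continuing the earlier sketch, Equation 2 corresponds to drawing a latent code and running the flow steps in reverse (again using the hypothetical step interface, here with an `inverse(r, x)` method):

```python
import torch

@torch.no_grad()
def sample(steps, x, shape):
    """Eq. 2: z ~ p_Z(z|x); y = g(x, z), with g the inverse flow."""
    z = torch.randn(shape)            # latent code from the standard Gaussian base
    r = z
    for step in reversed(steps):      # invert the steps in reverse order
        r = step.inverse(r, x)
    return r                          # r_0 = y
```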

The core task in structured output learning is to predict the best output $\mathbf{y}^*$ for an input $\mathbf{x}$. This process can be formalized as looking for an optimal $\mathbf{y}^*$ such that

$\mathbf{y}^* = \arg\max_{\mathbf{y}} p_\theta(\mathbf{y}|\mathbf{x}). \qquad (3)$

To compute Equation 3, we can use gradient-based optimization, e.g., minimizing the negative log-likelihood of $\mathbf{y}$ with gradient descent. However, in our experiments, we found this method to be slow, taking thousands of iterations to converge. Worse, since the probability density function is non-convex with a highly multi-modal surface, it often gets stuck in local optima, resulting in sub-optimal predictions. Therefore, we instead use a sample-based method to approximate the inference. Let $\mathbf{y}_1, \ldots, \mathbf{y}_k$ be samples drawn from $p_\theta(\mathbf{y}|\mathbf{x})$. Estimated marginal expectations for each variable can be computed from the average:

$\hat{\mathbf{y}} = \frac{1}{k} \sum_{j=1}^{k} \mathbf{y}_j. \qquad (4)$

This sample-based method overcomes the gradient-based method's problems. In our experiments, we found that we only need a small number of samples to get a high-quality prediction, so inference is faster. The sample average can also smooth out some anomalous values, further improving prediction. Figure 2 illustrates the difference between the gradient-based method and the sample-based method.

When $\mathbf{y}$ is a continuous variable, we can directly use $\hat{\mathbf{y}}$ from the above sample-based prediction. When $\mathbf{y}$ is discrete, we follow previous literature [Belanger and McCallum2016, Gygli, Norouzi, and Angelova2017] and round $\hat{\mathbf{y}}$ to discrete values. In our experiments, we find that the predicted values are already near integral values.
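The sample-based predictor of Equation 4 then reduces to a few lines, building on the `sample` sketch above; the default number of samples `k` is illustrative:

```python
import torch

@torch.no_grad()
def predict(steps, x, shape, k=10, discrete=False):
    """Eq. 4: estimate y by averaging k conditional samples."""
    ys = torch.stack([sample(steps, x, shape) for _ in range(k)])
    y_hat = ys.mean(dim=0)                    # sample average smooths anomalous values
    return y_hat.round() if discrete else y_hat
```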


Figure 2: Illustration of the difference between the gradient-based method and the sample-based method. From left to right: the input image, the ground truth label, the gradient-based prediction, and the sample-based prediction. In the third image, the horse has a horn on its back because the gradient-based method is trapped in a local optimum that places the horse's head in that location. In the fourth image, the sample average smooths out the horn because most samples do not make this mistake.

5 Experiments

In this section, we evaluate c-Glow on five structured prediction tasks: binary segmentation, multi-class segmentation, image denoising, depth refinement, and image inpainting. We find c-Glow is among the class of state-of-the-art methods while retaining its likelihood and sampling benefits.

5.1 Architecture and Setup

To specify a c-Glow architecture, we need to define conditioning networks that generate weights for the conditional actnorm, 1×1 convolutional, and affine layers.

For the conditional actnorm layer, we use a six-layer conditioning network. The first three layers are convolutional layers that downscale the input $\mathbf{x}$ to a reasonable size. The last three layers are fully connected layers, which transform the resized $\mathbf{x}$ into the scale and bias vectors. For the downscaling convolutional layers, we use a simple method to determine their kernel size and stride. Let $i$ and $o$ be the input and output sizes. Then we set the stride to $\lfloor i/o \rfloor$ and the kernel size to $i - (o - 1) \times \mathrm{stride}$, so that a valid convolution produces exactly the target output size.
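A small helper shows this downscaling rule as we have reconstructed it; it guarantees that a valid (unpadded) convolution with these parameters maps size $i$ exactly to size $o$:

```python
def downscale_params(i, o):
    """Kernel size and stride for a valid convolution mapping size i to size o."""
    stride = i // o
    kernel = i - (o - 1) * stride    # so that (i - kernel) // stride + 1 == o
    return kernel, stride

# Example: downscale_params(128, 32) == (4, 4).
```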

For the conditional 1×1 convolutional layer, we use a similar six-layer network to generate the weight matrix. The only difference is that the last fully connected layer generates the weight matrix $W$. For the actnorm and 1×1 convolutional conditioning networks, the number of channels of the convolutional layers and the width of the fully connected layers will impact the model's performance.

For the conditional affine layer, we use a three-layer conditioning network to extract features from $\mathbf{x}$, which we concatenate with $\mathbf{u}_b$. Among the three layers, the first and the last layers use 3×3 kernels. The middle layer is a downscaling convolutional layer. We varied the number of channels of this conditioning network and found that the model is not very sensitive to this choice, so we fix it across our experiments. The affine layer itself is composed of three convolutional layers with 256 channels.

We use the same multi-scale architecture as Glow to connect the layers, so the number of levels $L$ and the number of steps $K$ of each level will also impact the model's performance. We optimize with mini-batch Adam [Kingma and Ba2014]. Based on our empirical results, these settings allow the model to converge quickly. For the experiments on small datasets, i.e., semantic segmentation and image denoising, we run the optimization long enough to guarantee the algorithms have fully converged. For the experiments on inpainting, the training set is large, so we train for a fixed budget of iterations.
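A hypothetical training step tying the earlier sketches together (using the `dequantize` and `conditional_log_likelihood` functions defined above; dequantization is only needed for discrete outputs):

```python
import torch

def train_step(steps, optimizer, x, y, discrete=True):
    """One gradient step on the exact negative conditional log-likelihood."""
    y_cont = dequantize(y) if discrete else y
    nll = -conditional_log_likelihood(steps, y_cont, x).mean()
    optimizer.zero_grad()
    nll.backward()
    optimizer.step()
    return nll.item()
```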

5.2 Binary Segmentation

In this set of experiments, we use the Weizmann Horse Image Database [Borenstein and Ullman2002], which contains images of horses and their segmentation masks indicating whether pixels are part of horses or not. The training set contains 200 images, and the test set contains 128 images. We compare c-Glow with DVN [Gygli, Norouzi, and Angelova2017], NLStruct [Graber, Meshi, and Schwing2018], and FCN [Long, Shelhamer, and Darrell2015] (for FCN, we use code from https://github.com/wkentaro/pytorch-fcn). Since the code for DVN and NLStruct is not available online, we quote the results reported by Gygli, Norouzi, and Angelova (2017) and Graber, Meshi, and Schwing (2018). We use mean intersection-over-union (IOU) as the metric. We resize the images and masks to 32×32, 64×64, and 128×128 pixels. For c-Glow, we follow Kingma and Dhariwal (2018) to preprocess the masks; we copy each mask three times and tile the copies together, so $\mathbf{y}$ has three channels. This transformation can improve the model performance. We set the number of levels $L$, the depth $K$, and the conditioning network sizes to values tuned for this task.

Image Size   c-Glow   FCN     DVN     NLStruct
32×32        0.812    0.558   0.840   –
64×64        0.852    0.701   –       0.752
128×128      0.858    0.795   –       –
Table 1: Binary segmentation results (IOU).

Table 1 lists the results. DVN only reports results on 32×32 images, and NLStruct only on 64×64 images. NLStruct was tested on a smaller test set with 66 images; in our experiments, we found that the smaller test set does not have a significant impact on the IOUs. DVN and NLStruct are deep energy-based models. FCN is a feed-forward deep model specifically designed for semantic segmentation. The energy-based models outperform FCN because they use energy functions to capture the dependencies among output labels. Specifically, DVN performs the best on 32×32 images. The papers on DVN and NLStruct do not include results for large images, so we only include small-image results for them. In contrast, c-Glow can easily handle larger structured prediction tasks, e.g., 128×128 images. Even though c-Glow performs slightly worse than DVN on small images, it significantly outperforms FCN and NLStruct on larger images. The IOUs of c-Glow on larger images are also better than DVN's on small images.

5.3 Multi-class Segmentation

In this set of experiments, we use the Labeled Faces in the Wild (LFW) dataset [Huang, Jain, and Learned-Miller2007, Kae et al.2013]. It contains 2,927 images of faces, which are segmented into three classes: face, hair, and background. We use the same training, validation, and test split as previous works [Kae et al.2013, Gygli, Norouzi, and Angelova2017], and super-pixel accuracy (SPA) as our metric. Since c-Glow predicts pixel-wise labels, we follow previous papers [Tsogkas et al.2015, Gygli, Norouzi, and Angelova2017] and use the most frequent label in a super-pixel as its class. We resize the images and masks to 32×32, 64×64, and 128×128 pixels. We compare our method with DVN and FCN. For c-Glow, we use the same settings as in the binary segmentation experiments, except that we increase the model size by adding one more level. This is because the LFW dataset is larger and multi-class segmentation is more complicated.

Image Size   c-Glow   FCN     DVN
32×32        0.914    0.745   0.924
64×64        0.931    0.792   –
128×128      0.945    0.951   –
Table 2: Multi-class segmentation results (SPA).

The results are in Table 2. On 32×32 images, DVN performs the best, but c-Glow is comparable. C-Glow performs better than FCN on 64×64 images but slightly worse than FCN on 128×128 images. FCN performs well on large images but worse than the other methods on small images. We attribute this to two reasons. First, for small images, the input features do not contain enough information; the inference of c-Glow and DVN combines the features with the dependencies among output labels to produce better results. In contrast, FCN predicts each output independently, so it is not able to capture the relationships among output variables. Second, on larger images, the higher resolution makes segmented regions wider in pixels [Long, Shelhamer, and Darrell2015, Gygli, Norouzi, and Angelova2017], so a feed-forward network that produces coarse, smooth predictions can perform well. C-Glow's performance is stable: whether on small or large images, it generates good-quality results. Even though it is slightly worse than the best methods on 32×32 and 128×128 images, it significantly outperforms FCN on small images. Moreover, c-Glow's SPAs on larger images are better than DVN's on small images.

5.4 Color Image Denoising

In this section, we conduct color image denoising on the BSDS500 dataset [Arbelaez et al.2010]. We train models on 400 images and test them on the commonly used test set of 68 images [Roth and Black2009]. Following previous work [Schmidt and Roth2014], we crop a region from each image, resize it, and add Gaussian noise to it. We use peak signal-to-noise ratio (PSNR) as our metric, where higher PSNR is better. We compare c-Glow with some state-of-the-art baselines, including BM3D [Dabov et al.2007], DnCNN [Zhang et al.2017], and McWNNM [Xu et al.2017]. DnCNN is a deep feed-forward model specifically designed for image denoising. BM3D and McWNNM are traditional non-deep models for image denoising. For c-Glow, we use hyperparameter settings tuned for this task. Let $\mathbf{x}_c$ be the clean images and $\mathbf{x}_n$ be the noisy images. To train the model, we follow Zhang et al. (2017) and use $\mathbf{x}_n$ as the input and the residual $\mathbf{x}_n - \mathbf{x}_c$ as the output. To denoise the images, we first predict $\hat{\mathbf{y}}$ and then compute $\mathbf{x}_n - \hat{\mathbf{y}}$.
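In this residual setup, denoising composes the sample-based predictor with a subtraction; a sketch using the hypothetical `predict` interface from Section 4.3:

```python
import torch

@torch.no_grad()
def denoise(steps, x_noisy, k=10):
    """Predict the noise residual and subtract it from the noisy image."""
    noise_hat = predict(steps, x_noisy, x_noisy.shape, k=k)  # sample-based prediction (Eq. 4)
    return x_noisy - noise_hat
```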

c-Glow McWNNM BM3D DnCNN
27.61 25.58 28.21 28.53
Table 3: Color image denoising results (PSNR).

Figure 3: Example qualitative results. From left to right: the noisy image, the ground truth, the c-Glow output, and the DnCNN output.

The PSNR comparisons are in Table 3. C-Glow produces reasonably good results, but it is worse than DnCNN and BM3D. To further analyze c-Glow's performance, we show qualitative results in Figure 3. One main reason the PSNR of c-Glow is lower than DnCNN's is that the images generated by DnCNN are smoother than those generated by c-Glow. We believe this is caused by a drawback of flow-based models: they use squeeze layers to fold input tensors to exploit the local correlation structure of an image, and the squeeze layers use a spatial pixel-wise checkerboard mask to split the input tensor, which may cause the values of neighboring pixels to vary non-smoothly.

5.5 Denoising for Depth Refinement

In this set of experiments, we use the seven scenes dataset [Newcombe et al.2011], which contains noisy depth maps of natural scenes. The task is to denoise the depth maps. We use the same method as Wang, Fidler, and Urtasun (2016) to process the dataset. We train our model on images from the Chess scene and test on 5,500 images from the other scenes. The images are randomly cropped. We use PSNR as the metric. We compare c-Glow with ProximalNet [Wang, Fidler, and Urtasun2016], FilterForest [Ryan Fanello et al.2014], and BM3D [Dabov et al.2007]. For c-Glow, we use smaller conditioning networks for this task, because the depth maps are single-channel grayscale images.

We list the metric scores in Table 4. ProximalNet is a deep energy-based structured prediction model, and FilterForest and BM3D are traditional filter-based models. ProximalNet works better than the filter-based baselines, and c-Glow obtains a slightly better PSNR still.

c-Glow ProximalNet FilterForest BM3D
36.53 36.31 35.63 35.46
Table 4: Depth refinement scores (PSNR).

5.6 Image Inpainting

Inferring parts of images that are censored or occluded requires modeling the structure of dependencies across pixels. In this set of experiments, we test c-Glow on the task of inpainting censored images from the CelebA dataset [Liu et al.2015], which has around 200,000 images of faces. We randomly select 2,000 images as our test set. We centrally crop the images and resize them. We use central block masks to hide a block of pixels at the center of each input image. For training the model, we set the features $\mathbf{x}$ to be the occluded images and the labels $\mathbf{y}$ to be the center region that needs to be inpainted. We compare our method with DCGAN inpainting (DCGANi) [Yeh et al.2017], which is the state-of-the-art deep model for image inpainting. We use PSNR as our metric.

c-Glow DCGANi-b DCGANi
24.88 23.65 22.73
Table 5: Image inpainting scores (PSNR). “DCGANi-b” represents DCGANi with Poisson blending.

Figure 4: Sample results of c-Glow and DCGAN inpainting. From left to right: the ground truth, the corrupted image, DCGANi, DCGANi-b, and c-Glow.

The PSNR scores are in Table 5, and Figure 4 contains sample inpainting results. C-Glow outperforms DCGAN inpainting in both PSNR and the quality of the generated images. Note that the DCGAN inpainting method depends largely on postprocessing the images with Poisson blending, which can make the color of the inpainted region align with the surrounding pixels; however, the shapes of features like noses and eyes are still not well recovered. Even though the images inpainted by c-Glow are slightly darker than the original images, the shapes of features are well captured.

5.7 Discussion

We evaluated c-Glow on five different structured prediction tasks. Two tasks require discrete outputs (binary and multi-class segmentation), while the other three require continuous outputs. C-Glow works well on all the tasks and scores comparably to the best method for each task. We compare c-Glow with different baselines for each task, some specifically designed for that task and some that are general deep energy-based models. Our results show that c-Glow outperforms deep energy-based models on many tasks, e.g., scoring higher than DVN and NLStruct on binary segmentation, and it also outperforms some task-specific deep models, e.g., DCGAN inpainting. However, c-Glow's generated images are not smooth enough, so its PSNR scores are slightly below DnCNN's and BM3D's for denoising. C-Glow handles these different tasks with the same conditioning network architecture with only slight changes to the network sizes, demonstrating that c-Glow is a strong general-purpose model.

6 Conclusion

In this paper, we propose conditional generative flows (c-Glow), which are conditional generative models for structured output learning. The model uses the change-of-variables formula to compute exact conditional likelihoods for high-dimensional variables. We show how to convert the Glow model to a conditional form by incorporating conditioning networks. In contrast with existing deep structured models, our model can be trained by directly maximizing the exact likelihood, so it does not need surrogate objectives or approximate inference. With a learned model, we can efficiently draw conditional samples from the exact learned distribution. Our experiments test c-Glow on five structured prediction tasks, finding that c-Glow generates accurate conditional samples and has predictive abilities comparable to recent deep structured prediction approaches.

Acknowledgments

We thank NVIDIA’s GPU Grant Program and Amazon’s AWS Cloud Credits for Research program for their support.

References

Appendix A Experiment Details

In this section, we introduce more details of our experiments.

A.1 Network Architectures

Figure 5 illustrates the architectures of conditioning networks that we use in our experiments. For each layer except for the last layer, we use ReLU to activate the output. As in Glow, we use zero initialization for each layer. That is, we initialize the weights of each layer to be zero.

(a) Conditional Actnorm
(b) Conditional 1x1 Convolutional
(c) Conditional Affine
Figure 5: The networks we use to generate weights. The component "3x3 Conv-256" is a convolutional layer whose kernel size is 3×3 and whose number of channels is 256. The component "FC-32" is a fully connected layer whose width is 32. The parameter $n$ depends on other variable sizes: in the conditional affine layer, $n$ equals the number of channels of $\mathbf{x}_r$; in the conditional actnorm layer, $n = 2c$, where $c$ is the size of the scale; and in the conditional 1x1 convolutional layer, $W$ is a $c \times c$ matrix, so $n = c^2$. The "RConv" component is the convolutional layer for downscaling the input.

A.2 Conditional Likelihoods

To the best of our knowledge, c-Glow is the first deep structured prediction model whose exact likelihood is tractable. Figure 6 plots the evolution of minibatch negative log likelihoods during training. Since c-Glow learns a continuous density, the negative log likelihoods can become negative as the model better fits the data distribution.

(a) Binary Segmentation
(b) Multi-class Segmentation
(c) Color Image Denoising
(d) Depth Refinement
(e) Image Inpainting
Figure 6: Evolution of likelihoods.

A.3 Conditional Samples and Predictions

One important advantage of our model is the ability to easily generate high-quality conditional samples. In this section, we show some conditional samples as well as prediction results in the following figures. Specifically, Figure 7 shows binary segmentation results on the Horses dataset, Figure 8 shows multi-class segmentation results on the LFW dataset, Figure 9 shows denoised images on the BSDS dataset, Figure 10 shows refined depth images on the seven scenes dataset, and Figure 11 shows inpainted images on the CelebA dataset. For each figure except Figure 9, we show conditional samples in the third and fourth rows. For the color image denoising experiments, since the conditional samples are just noise added to images, we omit them in Figure 9.

For the experiments on semantic segmentation, i.e., Figures 7 and 8, the conditional samples are continuous. However, as shown in the figures, the generated continuous values are already close to integral. The conditional samples can reflect the diversity of predictions. For example, in the samples of horses in Figure 7, some sampled horses have different shapes of heads or tails, even though they are conditioned on the same input image. Similar phenomena can also be seen in Figure 8.

For image denoising and depth refinement, the outputs are continuous. As discussed in Section 5, the denoised images are not as smooth as those obtained by feed-forward networks. However, in Figures 9 and 10, we can see that c-Glow removes a large amount of noise from the noisy images and performs reasonably well.

For image inpainting, the images inpainted by c-Glow are slightly inconsistent in color with the surrounding pixels, but the difference is small. The inpainted images capture the shapes of features well; for example, for faces with sunglasses, c-Glow can recover the shape and color of the sunglasses. This is why c-Glow can outperform DCGANi. The conditional samples of inpainted images also reflect the diversity of the model's predictions. For example, for some samples of faces, parts such as the mouth and nose are the same, but the eyes stare in different directions.

(a)
(b)
(c)
Figure 7: Conditional samples and predictions on the Horses dataset. The first two rows are input images and ground truth labels, the third and fourth rows are conditional samples, and the last row is predicted labels.
(a)
(b)
(c)
Figure 8: Conditional samples and predictions on the LFW dataset. The first two rows are input images and ground truth labels, the third and fourth rows are conditional samples, and the last row is predicted labels.
(a)
(b)
(c)
Figure 9: Conditional samples and predictions on the BSDS dataset. From top to bottom: the noisy images, the clear images, and the denoised images.
(a)
(b)
(c)
Figure 10: Conditional samples and predictions on the seven scenes dataset. The first two rows are input images and ground truth labels, the third and fourth rows are conditional samples, and the last row is predicted labels.
(a)
(b)
(c)
Figure 11: Inpainting results on the CelebA dataset. The first row is the corrupted input. The second row is the ground truth labels. The third and fourth rows are conditional samples, and the last row is the inpainting results. We use sample average as the final prediction.