Automatically Searching for U-Net Image Translator Architecture
Image translators have been successfully applied to many important low-level image processing tasks. However, the classical network architecture of image translators, such as U-Net, is borrowed from other vision tasks like biomedical image segmentation. This straightforward adaptation may not be optimal and could cause redundancy in the network structure. In this paper, we propose an automatic architecture search method for image translators. Utilizing an evolutionary algorithm, we discover a more efficient network architecture which costs fewer computational resources and achieves better performance than the original one. Extensive qualitative and quantitative experiments are conducted to demonstrate the effectiveness of the proposed method. Moreover, we transplant the searched network architecture to other datasets which are not involved in the architecture search procedure. The efficiency of the searched architecture on these datasets further demonstrates the generalization ability of the method.
Image-to-image translation has been a fundamental research field of computer vision. Image-to-image translation tasks such as domain translation, image super-resolution, image colorization and image denoising are widely involved in the image processing of common applications. To translate images from one domain to another, one line of work predicts a per-pixel color histogram as an intermediate output to help the translation procedure. Another poses translation as a classification task and uses class rebalancing at training time to increase the realism of the results. The pix2pix method first leveraged generative adversarial networks in a conditional setting to solve the image-to-image translation problem. Other work suggests that adversarial training can be unstable and prone to failure for high-resolution image generation tasks, and instead adopts a modified perceptual loss to synthesize images, which are high resolution but often lack fine details and realistic textures. A novel adversarial loss, together with a coarse-to-fine generator and a multi-scale discriminator, has also been presented to address the high-resolution image-to-image translation problem. These previous methods have made great progress and achieve good performance on image-to-image translation tasks. However, their network architectures become more complicated and consume more computation.
In addition, most of them study loss functions, weight normalization and training techniques, and never deal with the architecture of the translator explicitly. The design of a deep neural network can have an enormous influence on its performance. ResNet achieved much higher accuracy than VGGNet on image classification with fewer parameters by introducing residual blocks, and deep convolutional generative adversarial networks were introduced to generate better images. Although manually designed network architectures can achieve good performance in various tasks, recent progress has proved that automatically searched architectures can outperform hand-crafted structures. Reinforcement learning has been exploited to generate architectures that achieve better performance than human-invented networks with fewer parameters; evolutionary algorithms have been employed to search neural network architectures; a more efficient method learns the structure of CNNs using a sequential model-based optimization strategy; and a continuous relaxation of the architecture representation allows efficient searching using gradient descent.
There are major differences between visual recognition tasks and the image translation task. First, a generative model produced by an adversarial training process has to further deal with a discriminator model, while traditional visual recognition only requires a discriminative model. Second, the architecture of an image generator is different from that of an image classifier. Some generators take U-Net as the basic architecture, which has a symmetric structure and skip connections between the convolutional layers and deconvolutional layers. The search space of U-Net is fundamentally different from that of ordinary deep neural networks for classification. To this end, it is necessary to tailor a new neural architecture search method for the generator in the image translation task.
In this paper, we propose a novel framework to automatically search the architecture of an image translator utilizing an evolutionary algorithm. We use U-Net as the backbone and explore the channel number of each convolutional and deconvolutional layer as well as the essentiality of every skip connection in U-Net. We carefully design the genetic algorithm to apply it to the architecture search of U-Net. The search is conducted on the Cityscapes dataset, and the searched architecture is then transplanted to other benchmark datasets for image-to-image translation. Extensive experiments demonstrate the effectiveness of the proposed method.
Here we first briefly introduce the image-to-image translation task with generative adversarial networks and the genetic algorithm for automatically searching network architectures.
2.1 Image-to-Image Translation
Image-to-image translation can be taken as a per-pixel classification or regression problem. Recently, generative adversarial networks have been used to solve this problem. In an image-to-image translation task, such as translating semantic maps to street views on the Cityscapes dataset, the objective function within the framework of GANs can be written as

$\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x, y}[\log D(x, y)] + \mathbb{E}_{x}[\log(1 - D(x, G(x)))]$
where $X$ represents the source domain (e.g. semantic maps) and $Y$ represents the target domain (e.g. street views). Denote a training sample in domain $X$ as $x$, and the corresponding image in domain $Y$ as $y$. The generator transforms images from the source domain to the target domain (i.e. $G: X \to Y$), while the discriminator distinguishes which domain the input comes from. The task of domain transfer using generator $G$ and discriminator $D$ is formulated as a minimax problem: the generator is optimized to generate images that fool the discriminator $D$, whereas the discriminator is optimized to distinguish which domain an image comes from.
In the task of paired image-to-image translation, besides the regular adversarial loss for the generator, a conventional $L_1$ loss between the fake images produced by the generator and the real target images is often adopted, i.e.

$\mathcal{L}_{L1}(G) = \mathbb{E}_{x, y}\left[\| y - G(x) \|_1\right]$
Thus, the objective function for an image-to-image translator can be formulated as

$G^{*} = \arg\min_{G} \max_{D}\; \mathcal{L}_{cGAN}(G, D) + \lambda\, \mathcal{L}_{L1}(G)$
With the help of the minimax design of GANs and the $L_1$ loss, the generator can achieve satisfactory performance on image-to-image translation tasks. However, the generator architecture U-Net is directly borrowed from biomedical image segmentation and may not be the optimal choice for this supervised task.
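As a concrete illustration, the generator-side terms of the objective above can be sketched in NumPy. This is a minimal sketch, not the paper's training code; the weight `lam = 100.0` is an illustrative default, and `d_fake` stands for the discriminator's outputs on generated images.

```python
import numpy as np

def l1_loss(fake, real):
    # L1 reconstruction term between generated and target images.
    return np.mean(np.abs(fake - real))

def g_adv_loss(d_fake, eps=1e-8):
    # Generator's adversarial term: it wants D(x, G(x)) close to 1.
    return -np.mean(np.log(d_fake + eps))

def g_objective(d_fake, fake, real, lam=100.0):
    # Combined objective: adversarial term plus lambda-weighted L1 term.
    return g_adv_loss(d_fake) + lam * l1_loss(fake, real)
```

The large `lam` reflects that, in paired translation, the reconstruction term dominates the gradient signal while the adversarial term sharpens details.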
2.2 GA for Network Architecture Search
Neural architecture search has attracted much attention in the field of deep learning. There are two popular families of search methods: evolutionary algorithms and reinforcement learning. In this paper, we adopt an evolutionary algorithm to explore the architecture space, as suggested in prior work.
In Genetic Algorithm (GA) based network architecture search, we maintain a population of individuals, each of which represents a certain network architecture. The population in the current generation are regarded as parents, who breed the next generation through three kinds of operations, namely selection, crossover and mutation, with the expectation that the offspring perform better than the parents. After a sufficient number of generations, the remaining offspring are better suited to the designed task. Each individual is assigned a selection probability through a roulette scheme according to its fitness:

$p_i = \frac{F(s_i)}{\sum_{j=1}^{N} F(s_j)}$
where $N$ is the number of individuals in the population, $s_i$ is the $i$-th individual encoding a specific neural architecture, and $F(s_i)$ is the fitness of $s_i$. The three operations are then executed with probabilities $p_s$, $p_c$ and $p_m$ respectively, where $p_s + p_c + p_m = 1$.
Selection: Given probability $p_s$, selection is conducted. An individual selected according to the roulette probability is directly copied as an offspring. Individuals with higher fitness have a greater chance of being preserved.
Crossover: Given probability $p_c$, crossover is conducted. Two individuals from the parent generation are selected according to the roulette probability. A random piece of one parent is swapped with the corresponding piece of the other to generate two new offspring. This operation integrates excellent genetic fragments of the parents.
Mutation: Given probability $p_m$, mutation is conducted. One individual from the parent generation is selected according to the roulette probability. Mutation randomly alters a random piece of the parent individual to produce an offspring. The common mutation operation for binary encoding is the XOR (bit-flip) operation. This operation increases the diversity of the population.
By iteratively employing these three genetic operations, the initial population is updated efficiently until the maximum iteration number is reached. After obtaining the individual with optimal fitness, we get a new network architecture. The key points for applying the genetic algorithm to network architecture search are the design of the representation code for each individual and of the fitness function that evaluates the performance of each individual on a specific task.
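The three genetic operations can be sketched as follows. This is a minimal illustration with list-encoded individuals, not the paper's exact implementation; the helper names are chosen for this sketch.

```python
import random

def roulette_probs(fitnesses):
    # Selection probability of each individual is its fitness
    # normalized by the total fitness of the population.
    total = sum(fitnesses)
    return [f / total for f in fitnesses]

def select(population, probs, rng):
    # Fitness-proportional (roulette-wheel) selection of one individual.
    return rng.choices(population, weights=probs, k=1)[0]

def crossover(a, b, rng):
    # Swap a random contiguous piece between two parents,
    # producing two offspring.
    i, j = sorted(rng.sample(range(len(a)), 2))
    return a[:i] + b[i:j] + a[j:], b[:i] + a[i:j] + b[j:]

def mutate_binary(code, rng):
    # XOR-style mutation: flip one randomly chosen bit.
    i = rng.randrange(len(code))
    return code[:i] + [1 - code[i]] + code[i + 1:]
```

Note that per-position content is preserved across a crossover pair: whatever the cut points, each position of the two offspring together holds exactly the two parent genes at that position.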
In this part, we introduce the details of the proposed method, including the search space, the representation of an individual architecture and how the evolutionary algorithm is applied during the search process. We use U-Net as the backbone to illustrate this part.
3.1 Search Space of Architecture
U-Net was first proposed for biomedical image segmentation but is widely used in image-to-image translation tasks. U-Net is similar to an encoder-decoder network composed of sequential convolutional layers and deconvolutional layers. Through the network, feature maps are progressively down-sampled with increasing channel numbers, then up-sampled back to the original size in the reverse manner. Different from the conventional encoder-decoder structure, U-Net has skip connections between mirrored layers in the encoder and decoder stacks. In this work, we follow the encoder-decoder fashion, but explore the channel number of each convolutional and deconvolutional layer as well as the essentiality of each skip connection. The filter sizes and strides of all convolutional layers are kept fixed for extracting visual features with considerable receptive fields and ensuring the consistency of the encoder-decoder architecture, and are not searched. Therefore, the search space of these configurations is defined as

$\mathcal{S} = \mathcal{C}^{L_1} \times \{0, 1\}^{L_2}$
where $\mathcal{C}$ is the set of available choices of channel numbers, which includes all optional output channel numbers of each convolutional layer. Due to the symmetry of U-Net, we only have to operate on the first half of the network to determine the whole architecture. $\{0, 1\}$ is the set of available choices for each skip connection: $1$ keeps the skip connection while $0$ removes it. For the U-Net generator considered here, each pair of mirrored convolutional and deconvolutional layers is bridged by a skip connection. The size of the search space for this architecture is $|\mathcal{C}|^{L_1} \times 2^{L_2}$, where $L_1$ and $L_2$ are the numbers of searched channel widths and skip connections respectively, which cannot be efficiently optimized by conventional methods.
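As a quick sanity check on why exhaustive enumeration is infeasible, the search-space size grows exponentially in the number of searched layers and skip connections. The numbers below are illustrative, not the paper's exact configuration.

```python
def search_space_size(num_channel_choices, num_searched_layers, num_skips):
    # Each searched convolutional layer independently picks one channel
    # number from the candidate set; each skip connection is
    # independently kept or removed.
    return (num_channel_choices ** num_searched_layers) * (2 ** num_skips)

# With, say, 8 candidate channel numbers, 8 searched layers and
# 8 skip connections (hypothetical values), the space already holds
# over four billion distinct architectures:
size = search_space_size(8, 8, 8)
```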
3.2 Representation of Search Architecture
We use two fixed-length codes to represent each variant of U-Net. $C = (c_1, \dots, c_{L_1})$ with $c_i \in \mathcal{C}$ represents the code for output channel numbers, which defines the output channel numbers of the first half of the convolutional layers. $S = (s_1, \dots, s_{L_2})$ with $s_j \in \{0, 1\}$ represents the code for skip connections, which determines whether each skip connection is kept or removed. $L_1$ is the length of $C$, and $L_2$ is the length of $S$. To produce initial individuals, bootstrap sampling from $\mathcal{C}$ is repeated $L_1$ times to generate a channel number code $C$, and bootstrap sampling from $\{0, 1\}$ is repeated $L_2$ times to generate a connection code $S$. Every pair $(C, S)$ represents a particular individual in the search space; the original U-Net, for example, corresponds to the pair whose connection code keeps every skip connection. The decoding process of the proposed method for constructing a new network is shown in Fig. 2. The first step is to determine the pair of codes $(C, S)$, either from an initial individual or from the genetic operations of selection, crossover and mutation. Then, a new network with a specific architecture is established according to the channel number code $C$. Finally, the numbers of input and output channels in each layer are calculated based on the symmetry of U-Net and the connection code $S$, which indicates the remaining skip connections, to formulate the resulting generator network.
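The decoding step can be sketched as below. Two wiring assumptions are made here that the paper does not spell out: every decoder layer except the innermost one may receive a skip connection, and a kept skip doubles that layer's input width by concatenating the mirrored encoder output.

```python
def decode_architecture(channel_code, skip_code, in_ch=3, out_ch=3):
    # channel_code: searched output widths of the encoder layers.
    # skip_code: one keep/remove flag per skip connection
    #            (length len(channel_code) - 1 under our assumption).
    L = len(channel_code)
    # Encoder layers as (input_width, output_width) pairs.
    enc = []
    prev = in_ch
    for c in channel_code:
        enc.append((prev, c))
        prev = c
    # Decoder mirrors the encoder; a kept skip (flag == 1) concatenates
    # the mirrored encoder output, doubling the layer's input width.
    dec = []
    for k in range(L):
        in_w = channel_code[L - 1 - k]
        if k >= 1 and skip_code[L - 1 - k]:
            in_w *= 2
        out_w = channel_code[L - 2 - k] if k < L - 1 else out_ch
        dec.append((in_w, out_w))
    return enc, dec
```

For a three-layer toy code, removing the inner skip and keeping the outer one changes only the affected decoder input widths, which is exactly the per-layer recalculation described above.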
3.3 Evolutionary Strategy
Usually, the generator and the discriminator in a conventional adversarial network are optimized alternately, and the discriminator is discarded after the training procedure. Therefore, we propose to utilize the genetic algorithm to update the population and obtain better architectures of the generator network $G$ only. The objective function for the generator is formulated as

$\mathcal{L}_G = \mathcal{L}_{cGAN}(G, D) + \lambda\, \mathcal{L}_{L1}(G)$
To evaluate the performance of each individual architecture in each generation of the population, we have to consider both model performance and computation efficiency. After a few mini-epochs of training for each individual, we conduct a validation procedure and calculate the individual fitness as

$F(s) = \frac{1}{\mathcal{L}(s) + \gamma \cdot c(s)}$

where $c(s)$ is the number of FLOPs required by the architecture $s$ to process one input image, and $\mathcal{L}(s)$ represents the cumulative loss of the generator during the validation procedure, which reflects the image translation quality of the architecture. $\mathcal{L}(s)$ is composed of a generator loss and the $L_1$ loss weighted by the hyper-parameter $\lambda$. The hyper-parameter $\gamma$ balances computation cost against model performance: a larger $\gamma$ makes the evaluation focus more on reducing computation cost. Note that both the FLOPs and the generator loss are negatively correlated with the fitness of a network architecture, hence the reciprocal in the fitness function.
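A minimal sketch of a fitness of this reciprocal form follows. The exact weighting in the paper may differ; here `gamma` plays the balancing role described above, and both the validation loss and the FLOPs count are assumed to be positive.

```python
def fitness(val_loss, flops, gamma):
    # Both the cumulative validation loss and the FLOPs lower the
    # fitness, hence the reciprocal; gamma trades translation
    # quality against computation cost.
    return 1.0 / (val_loss + gamma * flops)
```

With a fixed loss, a cheaper architecture always scores higher, and raising `gamma` amplifies that preference.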
3.4 Search Procedure
We adopt the classic genetic algorithm to conduct the network architecture search described in Alg. 1. First, we initialize the first generation of the population by random sampling from $\mathcal{C}$ and $\{0, 1\}$. For computation speed, we select subsets of the dataset for fast training and validation, denoted as the mini train set and mini validation set. Then, we train and validate every individual in the population on these subsets to calculate its fitness, from which we determine the probability of every individual being selected. After that, we use the three genetic operations, i.e. selection, crossover and mutation, to generate offspring in the iteration procedure.
By iteratively updating the individuals in the population with the genetic operations for a number of iterations, a new architecture of the generator network is discovered that, with high probability, performs better than the original architecture on the image-to-image translation task. For an easier implementation, we apply the genetic algorithm separately to the channel number code $C$ and the connection code $S$. For selection and crossover, the regular strategies of the genetic algorithm are adopted. As for mutation, a special strategy is needed because the channel number code is not binary, so the XOR operation cannot be used directly. For the mutation of the channel number code $C$, we design a unique mutation operation: for an element corresponding to a channel number to be mutated, we remove the current channel number from the set $\mathcal{C}$ and randomly select one of the remaining numbers as the new channel number.
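The channel-code mutation described above can be sketched as follows, assuming list-encoded channel codes; the function name is chosen for this sketch.

```python
import random

def mutate_channel(code, choices, rng):
    # For one randomly chosen position, drop the current channel
    # number from the candidate set and sample a replacement from
    # the remaining values, so the mutated gene always changes.
    i = rng.randrange(len(code))
    remaining = [c for c in choices if c != code[i]]
    new = list(code)
    new[i] = rng.choice(remaining)
    return new
```

Unlike a bit flip, this guarantees that exactly one gene changes and that its new value is a valid candidate different from the old one.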
[Figure: input images and results of the original model]
In the image-to-image translation task, we evaluate the effectiveness of the proposed method for searching the optimal architecture of an image generator. Extensive experiments are conducted on several paired image-to-image translation datasets, including Cityscapes, Facades and Maps. For a fair comparison, we use the same training strategy and the same discriminator structure as pix2pix.
For parameter settings, we fix the maximum number of generations and the population size, and adopt the probabilities of selection, crossover and mutation in each generation as suggested in prior work. Given the input and output image sizes, the length $L_1$ of the channel number code and the length $L_2$ of the skip connection code are determined accordingly. First, we conduct architecture search experiments on the Cityscapes dataset with different hyper-parameters $\gamma$. Then, we transfer the searched architecture to other image-to-image translation datasets such as Maps and Facades, which are not involved in the search procedure, to prove the generalization ability of the searched architecture.
[Figure: input images, results of the original model and results of the proposed method]
4.1 Search on Cityscapes Dataset
We conduct network architecture search experiments on the Cityscapes dataset, translating semantic maps to street views. We use the described genetic algorithm to explore the defined network search space. In the experiments, we adopt different hyper-parameters $\gamma$ to balance model performance and computation cost. A smaller $\gamma$ means less focus on the network FLOPs, resulting in higher computation consumption and better image translation quality.
[Table 1: Method | Memory | FLOPs | Mean Pixel Acc. | Mean Class Acc. | Mean Class IoU]
To further evaluate the effectiveness of the searching method, we conduct quantitative experiments on the searched architectures. We adopt the "FCN-score" to numerically evaluate each architecture: a pre-trained FCN-8s network is utilized to conduct the segmentation task, and three measurements, mean pixel accuracy, mean class accuracy and mean class IoU, are calculated. The results are shown in Tab. 1. The FLOPs of the model decrease as $\gamma$ increases, but the mean pixel accuracy, mean class accuracy and mean class IoU decrease as well. With the smallest $\gamma$, the searched network architecture achieves higher FCN scores than the original model, and its memory size is only 22MB, almost 1/10 of the original model. With a larger $\gamma$, we obtain a model that costs less than half the computation of the original model but achieves slightly lower scores. In practice, we can thus obtain various generator architectures fit for different applications. In addition, we can conclude that the model performance is more closely related to the computation FLOPs than to the memory size of the model. Interestingly, all of the searched models have fewer parameters than the original model, which means that the searched architectures cost less memory storage while achieving comparable or even better performance than the original one; the parameter redundancy of the original U-Net in the image-to-image translation task is severe. In Fig. 3, we show some results of the image translation from semantic maps to street views. With a lower $\gamma$, the model has more FLOPs, which leads to higher computation cost but better image translation quality.
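The three FCN-score measurements can be computed from a segmentation confusion matrix. This is a generic sketch of the standard definitions, not the paper's exact evaluation code.

```python
import numpy as np

def segmentation_scores(conf):
    # conf[i, j] counts pixels of ground-truth class i predicted as j.
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)               # correctly labelled pixels per class
    gt = conf.sum(axis=1)            # pixels per ground-truth class
    pred = conf.sum(axis=0)          # pixels per predicted class
    pixel_acc = tp.sum() / conf.sum()
    class_acc = np.mean(tp / gt)
    iou = np.mean(tp / (gt + pred - tp))
    return pixel_acc, class_acc, iou
```

Mean class accuracy and mean class IoU weight every class equally, so they penalize a translator whose outputs only segment well for the dominant classes.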
4.2 Architecture Transfer to Other Datasets
After the network search experiments on the Cityscapes dataset, we obtain new generator architectures. To demonstrate the effectiveness of a searched architecture, we use it to train on other paired image-to-image tasks, such as maps to satellite images and color bar labels to real facades. For a fair comparison, we use a searched architecture which has fewer FLOPs than the original U-Net to train on the other image-to-image datasets, i.e. Maps and Facades. Fig. 4 shows some results of the image transfer tasks on Maps and Facades respectively. Visually, the newly searched network outperforms the original network architecture while using fewer FLOPs and far fewer parameters. On the Maps dataset, the searched network generates satellite images more consistent with the input maps. On the Facades dataset, the searched network generates images with more realistic windows and doors. It is worth noting that neither the Maps nor the Facades dataset is involved in the process of network architecture search, which further demonstrates the generalization of the searched architecture and the effectiveness of the searching method.
4.3 Discussion of the Searched Architecture
Through the network architecture search experiments above, we obtain new generator architectures. As shown in Fig. 5, we compare a newly searched architecture with the original network architecture. The two architectures have comparable FLOPs, with the new one slightly lower. In the searched architecture, we find that it is not necessary to increase the number of channels as the network goes deeper; a few wide layers at specific positions are enough for good performance. Similar phenomena can also be observed in the other two architectures in Tab. 1.
In addition, not every skip connection is essential: the first and third skip connections are preserved while the other skip connections are removed. This shows that skip connections involving layers with larger feature map sizes are more essential in the image-to-image translation task. It also suggests that long-distance connections are more important than close ones: a far skip connection mixes in less correlated information, whereas a close skip connection consumes more computation while contributing less important information. Thus, most of the connections are removed.
As for the channel numbers of the convolutional and deconvolutional layers, the original U-Net has a symmetric encoder-decoder structure that gradually gets wider as convolutional layers are stacked, reaching the largest channel number when the feature map approaches the bottleneck; the channel number of each subsequent deconvolutional layer then decreases accordingly. In the newly searched architecture, this principle appears to be broken: shallow layers tend to have more channels while layers near the bottleneck tend to have fewer, so more of the computation lies in the shallow layers. In the image-to-image translation task, more carefully designed shallow layers help to restore the details of the image.
In this paper, we propose a novel automatic network architecture search method for paired image-to-image translators. Utilizing an evolutionary algorithm, we search the channel numbers of the convolutional and deconvolutional layers as well as the essentiality of each skip connection for a better generator architecture. We carefully design the genetic algorithm for the architecture search of U-Net. Extensive experiments with different hyper-parameters are conducted to balance model performance and computation cost. Through experiments on the Cityscapes dataset, we obtain a high-performance model with fewer FLOPs than the original backbone U-Net, whose FCN scores on Cityscapes segmentation exceed those of the original U-Net. Furthermore, we transfer this architecture to other paired image-to-image translation tasks, including Maps and Facades, where the new architecture also outperforms the original one. Finally, we compare the new architecture with the original one, which may offer useful experience for the design of network architectures.
- (2018) Image blind denoising with generative adversarial network based noise modeling. In CVPR, pp. 3155–3164.
- (2017) Photographic image synthesis with cascaded refinement networks. In ICCV, pp. 1511–1520.
- (2016) Deep residual learning for image recognition. In CVPR, pp. 770–778.
- (2017) Image-to-image translation with conditional adversarial networks. arXiv preprint.
- (2016) Perceptual losses for real-time style transfer and super-resolution. In ECCV, pp. 694–711.
- (2016) Learning representations for automatic colorization. In ECCV, pp. 577–593.
- (2017) Photo-realistic single image super-resolution using a generative adversarial network. In CVPR.
- (2018) Progressive neural architecture search. In ECCV, pp. 19–34.
- (2018) DARTS: differentiable architecture search. arXiv preprint arXiv:1806.09055.
- (2015) Fully convolutional networks for semantic segmentation. In CVPR.
- (2015) Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434.
- (2018) Regularized evolution for image classifier architecture search. arXiv preprint arXiv:1802.01548.
- (2017) Large-scale evolution of image classifiers. In ICML, pp. 2902–2911.
- (2015) U-Net: convolutional networks for biomedical image segmentation. In MICCAI, pp. 234–241.
- (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
- (2018) High-resolution image synthesis and semantic manipulation with conditional GANs. In CVPR, pp. 8798–8807.
- (2018) Towards evolutional compression. In SIGKDD.
- (2016) Colorful image colorization. In ECCV, pp. 649–666.
- (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint.
- (2016) Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578.