Restricted Deformable Convolution based Road Scene Semantic Segmentation Using Surround View Cameras

Liuyuan Deng, Ming Yang, Hao Li, Tianyi Li, Bing Hu, Chunxiang Wang

This work is supported by the National Natural Science Foundation of China (U1764264) and the International Chair on Automated Driving of Ground Vehicle. Ming Yang is the corresponding author. L. Deng, M. Yang, T. Li, and B. Hu are with the Department of Automation, Shanghai Jiao Tong University, Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240, China (phone: +86-21-34204553; e-mail: MingYang@sjtu.edu.cn). H. Li is with the SJTU-ParisTech Elite Institute of Technology and also with the Department of Automation, Shanghai Jiao Tong University, Shanghai 200240, China (e-mail: hao.li@sjtu.edu.cn). C. Wang is with the Research Institute of Robotics, Shanghai Jiao Tong University, Shanghai 200240, China.
Abstract

Understanding the surrounding environment of the vehicle is still one of the challenges for autonomous driving. This paper addresses 360-degree road scene semantic segmentation using surround view cameras, which are widely equipped on existing production cars. First, in order to address the large-distortion problem in fisheye images, the Restricted Deformable Convolution (RDC) is proposed for semantic segmentation; it effectively models geometric transformations by learning the shapes of convolutional filters conditioned on the input feature map. Second, in order to obtain a large-scale training set of surround view images, a novel method called zoom augmentation is proposed to transform conventional images to fisheye images. Finally, an RDC based semantic segmentation model is built. The model is trained for real-world surround view images through a multi-task learning architecture that combines real-world images with transformed images. Experiments demonstrate the effectiveness of the RDC in handling images with large distortions, and the proposed approach achieves good performance on surround view cameras with the help of the transformed images.

Index Terms: Deformable convolution, semantic segmentation, road scene understanding, surround view cameras, multi-task learning.

I Introduction

An autonomous vehicle needs to perceive and understand its surroundings (such as road users, free space, and other road scene semantics) for decision making, path planning, etc. Vision-based approaches have become increasingly mature and practical for autonomous driving. Semantic segmentation takes an important step towards visual scene understanding by parsing an image into different regions with specific semantic categories, such as pedestrians, vehicles, and road. In recent years, road scene understanding has made huge progress, thanks to the methodology of Convolutional Neural Network (CNN) based semantic segmentation using narrow-angle or even wide-angle conventional cameras [1]. Conventional cameras follow the pinhole camera model well: all straight lines in the real world are projected as straight lines in the image. However, a limitation of conventional cameras is their incapability of capturing ultra wide-angle landscapes. In order to enable a vehicle to perceive the surrounding environment, the work presented in this paper explores CNN based road scene semantic segmentation using surround view cameras.

Surround view systems are widely applied in vehicles to provide drivers with a surround view of the environment. A surround view system normally consists of four to six fisheye cameras mounted around a vehicle. Each fisheye camera theoretically has a field of view (FOV) of about 180°. Despite the advantage of an ultra wide FOV, a fisheye camera causes strong distortion in captured images, which brings difficulties to image processing. Thus, fisheye camera images are usually undistorted first in practical usage [2, 3]. However, image undistortion hurts image quality (especially at image boundaries) [4] and leads to information loss. On the other hand, the segmented results on raw images can be broadly used as a source for various tasks; one example is shown in Fig. 10. This paper explores CNN based semantic segmentation on raw surround view images, as illustrated in Fig. 1.

Fig. 1: Illustration of CNN based semantic segmentation on raw surround view images. Surround view cameras consist of four fisheye cameras mounted on each side of the vehicle. Cameras facing different directions capture images with different image compositions.

Two challenging aspects are considered. The first is an effective deep learning model to handle fisheye images. Fisheye images have severe distortion, which is unavoidable in the process of projecting an image of a hemispheric field onto a plane [5]. The degree of distortion is related to the distance between the camera and the objects, and also to the radial angle. The distortions are not uniform over all spatial areas [4]. This demands that CNN models be capable of modeling large and unknown transformations. CNNs have already shown a remarkable representational ability with the help of large-scale datasets containing diverse scenes. This ability largely originates from the large capacity of deep models like VGGNet [6], GoogLeNet [7] and ResNet [8]. Besides, handcrafted structures, for example the pyramid pooling module [9], also contribute to the representational power. However, as discussed in [10], regular CNNs are inherently limited in modeling large, unknown geometric distortions, due to their fixed structures, such as fixed filter kernels, fixed receptive field sizes, and fixed pooling kernels.

The second is about training datasets for the deep neural networks. So far, state-of-the-art CNN-based semantic segmentation methods require large-scale pixel-level annotated images for parameter optimization. The annotation process is time-consuming and expensive, yet several road scene datasets have already been created [11, 12] and contribute to the development of semantic segmentation algorithms. But there are few large-scale annotated datasets of semantic segmentation for surround view cameras. In our previous work [13], a fisheye dataset was generated from the Cityscapes dataset for a forward-looking conventional camera. That is still not enough for surround view cameras. First, the image composition of cameras in different directions varies a lot. For example, as shown in Fig. 1, a forward-looking camera usually captures the rear view of front vehicles, but a sideways-looking camera captures the side view of surrounding vehicles. Second, the Cityscapes dataset was collected from cities in Europe; a model trained on such a dataset may not be suitable for applications in regions outside Europe.

This paper is a considerable extension of our previous conference publication [13]. We further address the method of road scene semantic segmentation using surround view cameras. A more effective module is proposed to handle images with large distortions. Datasets for surround view image semantic segmentation are augmented by using the zoom augmentation method. This method was originally proposed in [13] to augment the training data with a randomly changing focal length. In this paper, we redefine zoom augmentation as the operation of transforming existing conventional images to fisheye images and implement a zoom augmentation layer with a CUDA implementation for on-line training. Moreover, we successfully realize road scene semantic segmentation using surround view cameras. First, our proposed method exploits the deformable convolution [10] to handle fisheye images. To address the spatial correspondence problem [14], the Restricted Deformable Convolution (RDC) is proposed, which further restricts the deformable convolution for pixel-wise prediction tasks. Second, a variety of images are used to adapt models to the local environment; these images include those transformed from Cityscapes and SYNTHIA-Seqs via the zoom augmentation method, and some real-world surround view images of our local environment. Finally, a multi-task learning architecture is presented to train an end-to-end semantic segmentation model for real-world surround view images by combining a small number of real-world images and a large number of transformed images. AdaBN [15] is adopted to bridge the distributional gap between real-world images and transformed images. In addition, Hybrid Loss Weightings (HLW) is proposed to improve the generalization ability by introducing auxiliary losses with different loss weightings.

This paper is organized as follows: Section II reviews related works. Section III introduces the RDC, whereas Section IV describes the method of converting existing datasets to fisheye datasets. Section V presents the training strategy, and Section VI presents quantitative experiments.

II Related Work

Early semantic segmentation methods rely on handcrafted features; they use Random Decision Forests [16] or Boosting [17] to predict the class probabilities and use probabilistic models known as Conditional Random Fields (CRFs) to handle uncertainties and propagate contextual information across the image. In recent years, CNNs have made a huge step forward in visual recognition thanks to large-scale training datasets and high-performance Graphics Processing Units (GPUs). In addition, excellent open source deep learning frameworks like Caffe, MXNet and TensorFlow boost the development of algorithms. Powerful deep neural networks [6, 7, 8] emerged, largely reducing classification errors on ImageNet [18], which also benefits semantic segmentation. FCN [19] successfully improved the accuracy of semantic segmentation by adapting classification networks into fully convolutional networks.

For the task of semantic segmentation, it is crucial to incorporate context information from relevant image regions when making a prediction. A broad receptive field is usually desirable to capture all the useful information. The receptive field size can be increased multiplicatively by down-sampling operations and linearly by stacking more layers. After down-sampling, many low-level visual features are lost and the spatial structure of the scene is prone to be damaged. Dilated convolution, or atrous convolution [20, 21], is proposed to alleviate this problem by enlarging the receptive field without reducing the spatial resolution. It enlarges the kernel size by introducing "holes" in convolution filters without increasing the number of parameters. Note that the dilation rates should be carefully designed to alleviate gridding artifacts [22]. On the other hand, modern nets like ResNet [8] theoretically have a large receptive field, even larger than the input image, due to the significantly increased depth. However, as investigated in [23], the effective receptive field of a network is much smaller than the theoretical one. Instead of hand-crafted module design, the deformable convolution [10] learns the shapes of convolution filters conditioned on an input feature map. The receptive field and the spatial sampling locations are adapted according to the objects' scale and shape. This shows that it is feasible and effective to learn geometric transformations in CNNs for vision tasks. However, as indicated in [14], the deformable convolution does not address the spatial correspondence problem, which is critical in dense prediction tasks. The DTN [14] preserves the spatial correspondence of spatial transformer layers between the input and output and uses a corresponding decoder layer to restore the correspondence. However, the DTN learns a global parametric transformation, which limits its ability to model non-uniform geometric transformations at each location.

Some datasets for semantic road scene understanding have been created, for example, CamVid [11], Cityscapes [12], and Mapillary Vistas [24]. Cityscapes is a large-scale dataset for semantic urban scene understanding with 5000 finely annotated images. The images were captured in Europe using forward-looking conventional cameras. Data annotation as well as data collection is time consuming and expensive. Therefore, another increasingly popular way to overcome the lack of large-scale datasets is the use of synthetic data, such as SYNTHIA [25], Virtual KITTI [26], and GTA-V [27]. Synthetic data is usually used to augment real training data [25, 28]. SYNTHIA is generated by rendering a virtual city created with the Unity development platform for semantic segmentation of driving scenes. Our previous work [13] can also be regarded as providing a synthetic dataset, transformed from a real large-scale conventional image dataset. None of these datasets was created using surround view cameras.

In order to achieve higher accuracy of semantic segmentation, top performing networks based on very large models can be used, e.g., [9, 22]; however, such methods suffer from large computational costs. As a matter of fact, autonomous driving features multitasking with limited resources. The long inference times and large power consumption make them difficult to employ in on-road applications. On the other hand, some works [29, 30, 31] explore a good tradeoff between accuracy and efficiency, making them feasible on embedded devices. ERFNet [29] achieved an excellent tradeoff by applying factorized convolutions [32], and it can be trained from scratch. This paper takes ERFNet as the baseline model for efficient semantic segmentation.

Fig. 2: The sampling locations of 3×3 convolutions: (a) Standard convolution. (b) Dilated convolution with dilation 2. (c) Deformable convolution. (d) Restricted deformable convolution. The dark points are the actual sampling locations, and the hollow circles in (c) and (d) are the initial sampling locations. (a) and (b) employ a fixed grid of sampling locations. (c) and (d) augment the sampling locations with learned 2D offsets (red arrows). The primary difference between (c) and (d) is that the restricted deformable convolution employs a fixed central sampling location: no offset needs to be learned for the central sampling location in (d).

III Restricted Deformable Convolution

In this section, we describe the RDC, which is a restricted version of the deformable convolution. A factorized version of the RDC is also provided.

The regular convolution adopts a fixed filter with grid sampling locations, as shown in Fig. 2(a) and Fig. 2(b). The shape of a regular grid is a rectangle; for example, as shown in Fig. 2(b), the grid of a $3\times 3$ filter with dilation 2 is defined as:

$$\mathcal{R} = \{(-2,-2),(-2,0),(-2,2),(0,-2),(0,0),(0,2),(2,-2),(2,0),(2,2)\}.$$

The deformable convolution adds 2D offsets to the grid sampling locations, as shown in Fig. 2(c). Thus, each sampling location is learnable and dynamic.

In a deep CNN, the upper layers encode high-level semantic information with weak spatial information, including object- or category-level evidence. Features from the middle layers are expected to describe middle-level representations for object parts and retain spatial information. Features from the lower convolution layers encode low-level spatial visual information like edges, corners, circles, etc. The middle and lower layers are responsible for learning the spatial structures. If the deformable convolution is applied to the lower or middle layers, the spatial structures are susceptible to fluctuation, and the spatial correspondence between input images and output label maps is difficult to preserve. This is the spatial correspondence problem indicated in [14], which is critical in pixel-wise semantic segmentation. Hence, the deformable convolution is only applied to the last few convolution layers, as in the work presented in [10].

In this paper, a straightforward way is adopted to alleviate this problem. As illustrated in Fig. 2(d), we freeze the central location of the filter and let the outer locations be learnable, considering that the ability to model transformations heavily depends on the outer sampling locations. This variant of the deformable convolution is called Restricted Deformable Convolution (RDC), as shown in Fig. 3. The RDC is first initialized with the shape of a regular convolution filter. Then 2D offsets are learned by a regular convolutional layer to augment the regular grid locations except the center. The shape of the filter is deformable and learned from the input features. The RDC can be included in a standard neural network architecture to enhance the ability of modeling geometric transformations.

III-A Formulation

The convolution operator slides a filter or kernel over the input feature map $x$ to produce the output feature map $y$. For each sliding position $\mathbf{p}_0$, a regular convolution with filter weights $w$, bias term $b$ and stride 1 can be formulated as

$$y(\mathbf{p}_0) = \sum_{c}\sum_{n=1}^{N} w_c(\mathbf{p}_n)\, x_c(\mathbf{p}_0 + \mathbf{p}_n) + b \qquad (1)$$

where $c$ is the index of the input channel, $\mathbf{p}_0$ is the base position of the convolution, and $\mathbf{p}_n$ with $n = 1, \ldots, N$ enumerates the locations in the regular grid $\mathcal{R}$, with $N = |\mathcal{R}|$. The center of $\mathcal{R}$ is denoted as $\mathbf{p}_{n_c}$, which is always equal to $(0,0)$ under the assumption that both the height and the width of the kernel are odd numbers, such as $3\times 3$ and $5\times 5$. This assumption is suitable for most CNNs. $n_c$ is the index of the central location in $\mathcal{R}$.

The deformable convolution augments all the sampling locations with learned offsets $\{\Delta\mathbf{p}_n\}_{n=1}^{N}$. Each offset has a horizontal component and a vertical component, so in total $2N$ offset parameters need to be learned for each sliding position. Equation (1) becomes

$$y(\mathbf{p}_0) = \sum_{n=1}^{N} w(\mathbf{p}_n)\, x(\mathbf{p}_0 + \mathbf{p}_n + \Delta\mathbf{p}_n) + b \qquad (2)$$

where $\mathbf{p}_0 + \mathbf{p}_n + \Delta\mathbf{p}_n$ is the learned sampling position on the input feature map. The input channel index $c$ in (1) is omitted in (2) for notational clarity, because the same operation is applied in every channel.

In order to preserve the spatial structure, we restrict the deformable convolution by fixing its central location. That is to say, the offset of the center, $\Delta\mathbf{p}_{n_c}$, is set to $(0,0)$. Since the center of $\mathcal{R}$, $\mathbf{p}_{n_c}$, is also equal to $(0,0)$, the learned position is formulated as

$$\hat{\mathbf{p}}_n = \begin{cases} \mathbf{p}_0, & n = n_c \\ \mathbf{p}_0 + \mathbf{p}_n + \Delta\mathbf{p}_n, & n \neq n_c. \end{cases}$$

The RDC can also be formulated by

$$y(\mathbf{p}_0) = w(\mathbf{p}_{n_c})\, x(\mathbf{p}_0) + \sum_{n \neq n_c} w(\mathbf{p}_n)\, x\big(\mathbf{p}_0 + \mathbf{p}_n + (\Delta x_n, \Delta y_n)\big) + b$$

where $\Delta x_n$ and $\Delta y_n$ are the horizontal and vertical components of $\Delta\mathbf{p}_n$. The first term of the formula calculates the weighted value for the fixed central location. The second term calculates the weighted sum over the learned outer locations. The learned outer positions are generally not integer numbers, because the offsets are real numbers. Bilinear interpolation is used to sample the input feature map at a fractional position. For the outer locations, the sampled value is formulated by

$$x(\hat{\mathbf{p}}_n) = \sum_{\mathbf{q} \in \mathcal{N}(\hat{\mathbf{p}}_n)} G(\mathbf{q}, \hat{\mathbf{p}}_n)\, x(\mathbf{q})$$

where $x(\mathbf{q})$, $\mathbf{q} \in \mathcal{N}(\hat{\mathbf{p}}_n)$, denotes the values of the four nearest integer positions on the input feature map $x$, and $G(\cdot,\cdot)$ is the bilinear interpolation kernel. This bilinear interpolation operation is differentiable, as explained in [10].
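To make the formulation concrete, the following toy Python sketch evaluates the RDC response at a single sliding position of a single-channel feature map (the bias term and the offset-learning layer are omitted). The function names and the dictionary-based parameter layout are illustrative only; a practical implementation, such as the authors' MXNet operator, would vectorize this over all positions and channels on the GPU.

```python
import numpy as np

def bilinear_sample(x, py, px):
    """Bilinearly sample the 2D feature map x at the fractional position (py, px)."""
    h, w = x.shape
    y0, x0 = int(np.floor(py)), int(np.floor(px))
    y1, x1 = y0 + 1, x0 + 1
    wy1, wx1 = py - y0, px - x0            # interpolation weights
    wy0, wx0 = 1.0 - wy1, 1.0 - wx1
    def at(r, c):                          # zero padding outside the feature map
        return x[r, c] if 0 <= r < h and 0 <= c < w else 0.0
    return (wy0 * wx0 * at(y0, x0) + wy0 * wx1 * at(y0, x1) +
            wy1 * wx0 * at(y1, x0) + wy1 * wx1 * at(y1, x1))

def rdc_response(x, p0, weights, grid, offsets):
    """RDC output at one sliding position p0 = (row, col).

    `grid` lists the N kernel locations of the regular grid R (the center
    is (0, 0)); `weights` maps each grid location to its filter weight;
    `offsets` maps each *outer* location to its learned (dy, dx) offset.
    The center stays fixed at p0, so it needs no offset."""
    r0, c0 = p0
    out = weights[(0, 0)] * x[r0, c0]      # fixed central sampling location
    for pn in grid:
        if pn == (0, 0):
            continue                        # center: no learned offset
        dy, dx = offsets[pn]                # learned 2D offset for this location
        out += weights[pn] * bilinear_sample(x, r0 + pn[0] + dy, c0 + pn[1] + dx)
    return out
```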

As illustrated in Fig. 3, like the deformable convolution, the offsets in the RDC are learned with a convolutional layer and from the same input feature map. The spatial resolution of the output offset fields is identical to that of the output feature map. Therefore, for each sliding position, a specific shape of the filter is learned. $2(N-1)$ offset parameters are required to define the new shape, whereas no parameter is required to define the offset of the central location.

Fig. 3: A restricted deformable convolution. The module is initialized with a regular dilated filter (the hollow circles on the input feature map). Offset fields are learned from the input feature map by a regular convolutional layer; their channel dimension of $2(N-1)$ corresponds to the 2D offsets of the outer locations (the red arrows). The actual sampling positions (dark points) are obtained by adding the 2D offsets to the initial locations. The value at a new position is obtained by using bilinear interpolation to weight the four nearest points. The yellow arrows denote the backpropagation paths of the gradients.

The whole module is differentiable. It can be trained with the standard backpropagation method, allowing for end-to-end training of the models into which it is injected. As illustrated in Fig. 3, the gradients are passed to the filter weights, and are also backpropagated to the offsets and the input feature map through the bilinear operation.
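Below is a minimal sketch of how such an offset-prediction layer could look in PyTorch (the paper uses MXNet; the layer and argument names here are illustrative). Zero-initializing its weights means the RDC initially behaves like the regular (dilated) convolution it replaces, which matches the warm-up strategy used later in the experiments.

```python
import torch.nn as nn

def make_offset_layer(in_channels, kernel_size=3):
    """Regular convolution that predicts the offset field for an RDC with a
    k x k kernel: 2 * (k*k - 1) output channels, i.e. one (dy, dx) pair per
    outer sampling location; the fixed center needs no offset."""
    n_outer = kernel_size * kernel_size - 1
    offset_conv = nn.Conv2d(in_channels, 2 * n_outer,
                            kernel_size=kernel_size, padding=kernel_size // 2)
    nn.init.zeros_(offset_conv.weight)   # start from the regular-convolution shape
    nn.init.zeros_(offset_conv.bias)
    return offset_conv
```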

III-B Factorized Restricted Deformable Convolution

Fig. 4: (a) Regular convolution. (b) Factorized convolutions. (c) Factorized restricted deformable convolution. The non-linearities in (b) and (c) are omitted in this illustration. The dark points, hollow circles and red arrows follow the same definitions as in Fig. 2.

2D filters can be approximated as a combination of 1D filters, for the sake of reducing memory and computational cost. In [32], a basic decomposed layer consists of vertical kernels followed by horizontal ones, with a non-linearity inserted between the 1D convolutions. For example, a $3\times 3$ convolutional layer (Fig. 4(a)) can be decomposed into two consecutive factorized convolutional layers of $3\times 1$ and $1\times 3$ (Fig. 4(b)). The ERFNet [29] has shown a good tradeoff between efficiency and accuracy with factorized convolutions. This paper also provides a factorized version of the RDC.

Fig. 5: Zoom augmentation results. On the left are the original color image and annotation. On the right are the images and annotations transformed by zoom augmentation with different focal lengths.

For the 2D RDC, each learned offset has two components: a vertical one and a horizontal one. With the 2D kernel decomposed into a vertical kernel and a horizontal kernel, the offsets can also be decomposed into components of the same directions, as shown in Fig. 4(c). In the 1D kernels, only one parameter is learned to control the dilation for each outer location. This is called Factorized Restricted Deformable Convolution (FRDC). It can also be interpreted as the dilations of the 1D kernels being adaptively learned. The number of additional parameters for FRDC is $N-1$, only half of that of the RDC. That means FRDC is less flexible than RDC. FRDC shows good performance for conventional images in the experiments.
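As a reference for the factorization itself, a sketch of the 1D decomposition in PyTorch is shown below (names are illustrative). It covers only the plain factorized pair of Fig. 4(b); an FRDC would additionally predict one offset per outer location of each 1D kernel, which is omitted here.

```python
import torch.nn as nn

def factorized_conv(channels, dilation=1):
    """A 3x3 convolution decomposed into a 3x1 (vertical) and a 1x3
    (horizontal) convolution with a non-linearity in between."""
    return nn.Sequential(
        nn.Conv2d(channels, channels, kernel_size=(3, 1),
                  padding=(dilation, 0), dilation=(dilation, 1)),
        nn.ReLU(inplace=True),
        nn.Conv2d(channels, channels, kernel_size=(1, 3),
                  padding=(0, dilation), dilation=(1, dilation)),
    )
```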

IV The Generation of Fisheye Image Dataset for Semantic Segmentation

Few large-scale datasets are available worldwide for fisheye image semantic segmentation. To enrich such datasets, a transformation method is proposed to convert conventional images to fisheye images. A mapping is built from the fisheye image plane to the conventional image plane; thus, the scene in a conventional image can be remapped into a fisheye image.

IV-A Mapping Conventional Image to Fisheye Image

A conventional image is captured by a pinhole camera. The perspective projection of the pinhole camera model can be described by (3); for fisheye cameras, perhaps the most common model is the equidistance projection [33], as in (4):

$$r = f\tan\theta \qquad (3)$$
$$r = f\theta \qquad (4)$$

where $\theta$ is the angle between the principal axis and the incoming ray, $r$ is the distance between the image point and the principal point, and $f$ is the focal length. Both a conventional image and a fisheye image can be treated as a hemispherical image projected onto a plane according to different projection models and from different view angles. The details of the geometrical imaging model are described in [5, 33]. Under the settings that the focal lengths of the perspective projection and the equidistance projection are identical and that the maximum viewing angle equals $\pi/2$, the mapping from a fisheye image point to the corresponding conventional image point is described by

$$r_c = f\tan\!\left(\frac{r_f}{f}\right) \qquad (5)$$

where $r_c$ denotes the distance between the image point and the principal point in the conventional image, and correspondingly $r_f$ denotes the distance between the image point and the principal point in the fisheye image.

The mapping relationship (5) is determined by the focal length $f$. A base focal length can be set such that the fisheye camera model approximately covers a hemispherical field. Each image and its corresponding annotation in an existing segmentation dataset are transformed using the same mapping function to generate the fisheye image dataset.
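A minimal NumPy/OpenCV sketch of this remapping is given below. It assumes the principal point is at the image center and uses the same focal length $f$ (in pixels) on both sides of Eq. (5); it is meant to illustrate the mapping, not to reproduce the authors' CUDA zoom augmentation layer.

```python
import numpy as np
import cv2

def fisheye_warp(image, f, interp=cv2.INTER_LINEAR):
    """Warp a conventional (pinhole) image into a fisheye-style image.

    For every pixel of the target fisheye image, the radius r_f to the
    principal point is converted to the conventional-image radius
    r_c = f * tan(r_f / f) as in Eq. (5), and the source image is sampled
    at that radius along the same direction."""
    h, w = image.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0          # assumed principal point

    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    dx, dy = xs - cx, ys - cy
    r_f = np.sqrt(dx ** 2 + dy ** 2)

    # Eq. (5); the angle is clipped just below pi/2 so tan() stays finite.
    theta = np.clip(r_f / f, 0.0, np.pi / 2 - 1e-3)
    r_c = f * np.tan(theta)

    scale = np.where(r_f > 1e-6, r_c / np.maximum(r_f, 1e-6), 1.0)
    map_x = (cx + dx * scale).astype(np.float32)
    map_y = (cy + dy * scale).astype(np.float32)

    return cv2.remap(image, map_x, map_y, interp,
                     borderMode=cv2.BORDER_CONSTANT, borderValue=0)
```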

IV-B Zoom Augmentation for Fisheye Image

Training deep networks requires a huge number of training images, but training datasets are always limited. Data augmentation methods are adopted to enlarge the training data using label-preserving transformations. Many forms of data augmentation are employed for semantic segmentation, such as horizontal flipping, scaling, rotation, cropping and color jittering. Among them, scaling (zoom-in/zoom-out) is one of the most effective forms. DeepLab [21] augments training data by randomly scaling the input images, and PSPNet [9] adopts random resizing combined with other augmentation methods. Conventionally, scaling means the operation of changing the image's size. On the other hand, scaling the image can also be reasonably treated as the action of changing the focal length of the camera.

Following this idea, a new data augmentation method called zoom augmentation was proposed in [13], specially designed for fisheye images. Instead of simply resizing the image, zoom augmentation augments the training dataset with additional data derived from existing sources by changing the focal length of the fisheye camera. Zoom augmentation adopts the mapping function in (5).

Here, this operation of warping existing conventional images to fisheye-style images is generally called zoom augmentation. A fixed focal length can be applied for zoom augmentation. A randomly changing focal length within a specified range can also be applied to generate fisheye images with different degrees of distortion. Via the zoom augmentation method, an existing conventional image dataset for semantic segmentation can be transformed into a fisheye-style image dataset. Fig. 5 illustrates how the focal length affects the mapping results: the smaller the focal length, the larger the degree of distortion. Thus, we can obtain fisheye images with different distortions by randomly changing the focal length.
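Using the warp sketched in Section IV-A, randomized zoom augmentation then amounts to drawing a focal length per sample and applying the same warp to the image and its annotation (nearest-neighbor interpolation for the label map). The focal length range below is a placeholder hyper-parameter, not a value from the paper.

```python
import random
import cv2

def zoom_augment(image, label, f_min, f_max):
    """Zoom augmentation with a randomly changing focal length; `fisheye_warp`
    is the helper sketched in Section IV-A."""
    f = random.uniform(f_min, f_max)
    warped_image = fisheye_warp(image, f)                     # bilinear for colors
    warped_label = fisheye_warp(label, f, cv2.INTER_NEAREST)  # nearest for labels
    return warped_image, warped_label
```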

Fig. 6: The multi-task learning architecture for road scene semantic segmentation. Cityscapes and SYNTHIA-Seqs are transformed via the zoom augmentation layer. Layers in the green block share their weights, while the classifiers do not. The total loss is the weighted sum of the main losses and the auxiliary losses.

V Training Strategy

In this section, we introduce the strategy of training the CNN model to improve semantic segmentation accuracy on real-world surround view images with the help of transformed images. It is rarely practical to train the model using the transformed datasets and then use it to handle real-world images, due to different label spaces (not all the target categories are the same as those of the source) and domain shift [15] (different datasets). A simple way is to use a real-world dataset to fine-tune the CNN model pre-trained on transformed datasets. However, overfitting occurs when the number of real-world images is limited. This paper uses both transformed images and real-world images to train the model. As shown in Fig. 6, a multi-task learning architecture is built to train the model on datasets with different label spaces. The ERFNet is adopted as the basic model. The last deconvolution layer in the ERFNet serves as a classifier. The same model with shared weights is used to train on both the source (transformed images) and the target (real-world images) domains. Two approaches are used to handle the domain shift and improve the generalization ability.

V-A Sharing Weights with Private Batch Normalization Statistics

As illustrated in Fig. 6, all the weights except those of the classifiers are shared to learn domain-invariant features. Domain-related knowledge is heavily reflected in the statistics of the Batch Normalization (BN) layers. In order to learn domain-invariant features, it is better for each domain to keep its own BN statistics in each layer. This paper uses an effective way to perform domain adaptation by sharing the weights but computing BN statistics separately for each domain. This is called AdaBN in [15].
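A minimal PyTorch sketch of this idea is shown below: the convolutional weights are shared across domains, while each domain keeps its own BatchNorm instance and hence its own running statistics. For simplicity the sketch also keeps per-domain affine parameters, whereas AdaBN itself only recomputes the statistics; class and variable names are illustrative.

```python
import torch
import torch.nn as nn

class DomainSpecificBN2d(nn.Module):
    """One BatchNorm2d per domain so that each domain accumulates its own
    running mean and variance, while the surrounding weights are shared."""

    def __init__(self, num_features, num_domains=3):
        super().__init__()
        self.bns = nn.ModuleList(
            nn.BatchNorm2d(num_features) for _ in range(num_domains))

    def forward(self, x, domain_id):
        return self.bns[domain_id](x)   # normalize with this domain's statistics

# Usage sketch: one shared convolution, domain-specific normalization.
conv = nn.Conv2d(64, 64, kernel_size=3, padding=1)   # shared weights
bn = DomainSpecificBN2d(64, num_domains=3)
x_real = torch.randn(2, 64, 32, 32)
out = bn(conv(x_real), domain_id=0)                   # domain 0 = real images
```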

V-B Hybrid Loss Weightings

During training, the loss function is the weighted sum of the softmax losses of the three tasks, as in (6). $L_r$ is the loss for real-world images; $L_c$ and $L_s$ are the losses for the transformed images. The main loss weighting $\lambda_m$ is used to balance the contributions of the different losses: the losses for transformed images act as a regularization term controlled by $\lambda_m$. A smaller $\lambda_m$ makes the training focus on the real data. A too small $\lambda_m$ incurs model overfitting, whereas a too large $\lambda_m$ has the consequence that the loss for the real data is overwhelmed by the regularization loss.

In order to further balance the contributions of the different losses, auxiliary losses are introduced. The auxiliary loss is formulated by (7). $L_r^{aux}$ is the auxiliary loss for real-world images; $L_c^{aux}$ and $L_s^{aux}$ are the auxiliary losses for the transformed images. The auxiliary loss weighting $\lambda_a$ is used to balance the contributions of the different auxiliary losses. During training, the weighted auxiliary loss is added to the total loss with a discount weight $\gamma$ ($\gamma$ is set to 0.2 in this paper), as formulated by (8).

$$L_{main} = L_r + \lambda_m (L_c + L_s) \qquad (6)$$
$$L_{aux} = L_r^{aux} + \lambda_a (L_c^{aux} + L_s^{aux}) \qquad (7)$$
$$L_{total} = L_{main} + \gamma L_{aux} \qquad (8)$$

In this paper, we use different weightings for the main loss and the auxiliary loss. That is, $\lambda_m$ does not have to equal $\lambda_a$. A bigger weighting introduces stronger regularization; thus, with a bigger $\lambda_a$ for the auxiliary losses, a smaller $\lambda_m$ for the main losses can be employed to balance the contributions of the different losses. This method is termed Hybrid Loss Weightings (HLW).
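A sketch of how (6)-(8) could be combined in code is given below (PyTorch-style, with illustrative names; the weighting values are placeholders rather than the values used in the paper).

```python
import torch.nn.functional as F

def hybrid_loss(main_logits, aux_logits, labels,
                lambda_m=0.5, lambda_a=1.0, gamma=0.2):
    """Hybrid Loss Weightings of Eqs. (6)-(8).  The three dicts hold the
    per-task outputs and targets under the keys 'real', 'cityscapes' and
    'synthia'; lambda_m and lambda_a are the main and auxiliary weightings."""
    def ce(logits, target):
        return F.cross_entropy(logits, target, ignore_index=255)

    # Eq. (6): main losses; transformed-image losses scaled by lambda_m.
    l_main = ce(main_logits['real'], labels['real']) + lambda_m * (
        ce(main_logits['cityscapes'], labels['cityscapes']) +
        ce(main_logits['synthia'], labels['synthia']))

    # Eq. (7): auxiliary losses with their own (typically larger) weighting.
    l_aux = ce(aux_logits['real'], labels['real']) + lambda_a * (
        ce(aux_logits['cityscapes'], labels['cityscapes']) +
        ce(aux_logits['synthia'], labels['synthia']))

    # Eq. (8): auxiliary term added with the discount weight gamma = 0.2.
    return l_main + gamma * l_aux
```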

VI Experiments

In this section, the datasets used for the experiments are first introduced. Then the RDC based model is evaluated on conventional and transformed fisheye datasets, respectively. At the end, experiments for road scene semantic segmentation using surround view cameras are conducted. A platform with two NVIDIA GTX 1080Ti GPUs is used to train and evaluate the models using MXNet. The semantic segmentation performance is measured by the standard metric of mean Intersection-Over-Union (mIOU).

VI-A Datasets for Surround View Image Semantic Segmentation

Two complementary datasets, Cityscapes [12] and SYNTHIA-Seqs [25], are used to augment surround view datasets via the zoom augmentation method. Cityscapes is a real large-scale dataset captured by a forward-looking conventional camera; its training set is used in the experiments. SYNTHIA-Seqs is captured in a virtual city using four conventional cameras facing different directions. Specifically, the Spring, Summer and Fall sub-sequences of SEQS-01, SEQS-02 and SEQS-04 are used, containing 34696 images in total. Both the real and synthetic datasets are transformed to augment the surround view images. The resolutions of the transformed images after applying zoom augmentation are set separately for Cityscapes and SYNTHIA-Seqs. In order to employ zoom augmentation with a randomly changing focal length during training, this paper implements a zoom augmentation layer. This new layer adopts a CUDA implementation, so images can be transformed online, and the time consumption of the layer is very small. Some examples transformed by the zoom augmentation method are illustrated in Fig. 7.

Besides, surround view images were captured by four fisheye cameras mounted around a moving vehicle and annotated using the Cityscapes annotation tool [12]. Part of these images is used for training and the rest for validation. The defined classes are listed in Fig. 11(e). Each image and its annotation share the same resolution. Some examples are shown in Fig. 11.

Fig. 7: Images transformed by zoom augmentation with a randomly changing focal length. Images in the first row are from Cityscapes; images in the second row are from SYNTHIA-Seqs. The first row captures the scene of the front view, whereas the second row captures scenes from different perspectives.

VI-B Evaluation for the Restricted Deformable Convolution

The ERFNet is reimplemented in MXNet as the baseline model with a few differences: a batch normalization layer is applied after each convolutional layer, and all the deconvolution layers use a fixed kernel size and stride. An RDCNet model is built by replacing the first two convolutional layers in the non-bt-1D blocks (Fig. 8(a)) of the ERFNet with RDC layers. RDCNet-$n$ denotes that only the last $n$ non-bt-1D blocks in the encoder are reconstructed, as shown in Fig. 8(c). Similarly, a DCNet-$n$ model is built using deformable convolution layers, and an FRDCNet-$n$ model is built using FRDC layers.

Fig. 8: RDC layers are applied in the last $n$ non-bt-1D blocks in the encoder of RDCNet-$n$. (a) Non-bt-1D block in ERFNet. (b) Reconstructed non-bt-1D block in RDCNet, where the first two convolutional layers are replaced with RDC layers. (c) The encoder of RDCNet-$n$.

All these models are trained from scratch in MXNet using Nesterov Accelerated Gradient (NAG) with a fixed mini-batch size, momentum and weight decay, starting from a fixed initial learning rate. Class balancing is not applied in the experiments; instead, the softmax loss is multiplied by a constant factor to balance the regularization. Following the practice suggested in [29], we first train the encoder and then attach the decoder to jointly train the full network. For the ERFNet, both the encoder and the joint model are trained for a fixed number of epochs with a step-wise learning rate decay. For the other models, the "poly" learning rate policy (the learning rate is multiplied by $(1-\mathrm{iter}/\mathrm{max\_iter})^{\mathrm{power}}$) is adopted to speed up the training; the encoder and the joint model are each trained for a fixed number of epochs. During the training of the encoder, the weights of the convolutional layer for offset learning are initialized to zero, while the other layers are initialized by the Xavier method. Unlike the training in [10], the encoder is not initialized with a pre-trained model. In order to stabilize the training, the offsets are kept unchanged during the first few epochs, so that a fixed convolution shape is employed to warm up the training. Besides, the learning rates for offset learning are set to different multiples of the base learning rate for training the encoder and the joint model, respectively.
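For reference, the "poly" schedule can be written as the following small helper; the power value shown is a common choice and not necessarily the one used in the paper.

```python
def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    """'Poly' learning-rate policy: base_lr * (1 - iter / max_iter) ** power."""
    return base_lr * (1.0 - float(cur_iter) / max_iter) ** power
```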

To analyze the performance of the ERFNet, the DCNet, the RDCNet, and the FRDCNet on conventional images and fisheye images, they were evaluated on the Cityscapes dataset and on the Fisheye-Cityscapes dataset, which was generated by the zoom augmentation method with a fixed focal length. The performances with different numbers of reconstructed non-bt-1D blocks are shown in Fig. 9(a) and Fig. 9(b). The DCNet, RDCNet, and FRDCNet are all better than the baseline model on both datasets. As the number of reconstructed blocks increases, the performances of these models first improve and then become saturated.

As shown in Fig. 9(a), the models saturate when 4 reconstructed blocks are applied on the conventional image dataset. The RDCNet and FRDCNet achieved better performance than the DCNet, and the FRDCNet-4 achieved the best performance, outperforming the baseline model in mIOU. But on the Fisheye-Cityscapes dataset, as shown in Fig. 9(b), the FRDCNet is the first to saturate as blocks are added, while RDCNet-8 achieved the best performance and outperformed the baseline in mIOU.

The FRDCNet shows poorer performance on the Fisheye-Cityscapes dataset despite its better performance on the Cityscapes dataset. This indicates that the models require a greater geometric transformation modeling capability to handle fisheye images. The RDCNet and DCNet inherently possess a more powerful ability to model geometric transformations, because there are no constraints on the outer sampling locations of the RDC and deformable convolution layers. As shown in Fig. 9(b), on the Fisheye-Cityscapes dataset, the DCNet achieved better performance than the RDCNet when one reconstructed block is applied, but as the number of reconstructed blocks increases, the RDCNet surpasses the DCNet. The experimental results show that the DCNet has a better geometric transformation modeling capability, but the RDCNet is less prone to saturation than the DCNet when more reconstructed blocks are used. This indicates that fixing the central sampling location of the RDC layer is effective for semantic segmentation. RDCNet-8 achieved the best score on the Fisheye-Cityscapes dataset and is adopted in the next section.

(a) Comparison on Cityscapes dataset.
(b) Comparison on Fisheye-Cityscapes dataset.
Fig. 9: Evaluating ERFNet (baseline), DCNet, RDCNet, and FRDCNet on different datasets.

VI-C Semantic Segmentation Using Surround View Cameras

The multi-task learning architecture is shown in Fig. 6. The RDCNet-8 is adopted as the basic model. The net is trained with the training set of the real-world surround view images, Cityscapes, and SYNTHIA-Seqs. The weights, except those of the classifiers, are shared among all the tasks. The training procedure follows that of the RDCNet described in the previous section; 50K iterations are employed for training the encoder and the joint model, respectively. In each iteration, four images are drawn from the three datasets to generate a mini-batch of samples. The conventional images of Cityscapes and SYNTHIA-Seqs are transformed to fisheye images online through the zoom augmentation layer.
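The per-iteration batch assembly can be sketched as follows (illustrative Python; the actual per-source batch sizes and data loaders are not specified here). Each drawn Cityscapes or SYNTHIA batch would pass through the zoom augmentation layer before entering the shared network.

```python
def mixed_batches(real_loader, cityscapes_loader, synthia_loader):
    """Yield one training iteration at a time, combining a batch from each
    of the three sources; zip stops when the shortest loader is exhausted,
    so the loaders are simply re-iterated every epoch."""
    for real, city, synth in zip(real_loader, cityscapes_loader, synthia_loader):
        yield {'real': real, 'cityscapes': city, 'synthia': synth}
```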

The zoom augmentation method can adopt a fixed focal length or a randomly changing focal length. For the fixed mode, separate focal lengths are set for Cityscapes and SYNTHIA-Seqs. For the random mode, the focal length is changed randomly within a range whose upper bound is 800. When AdaBN is applied, the BN statistics are not shared, and each BN layer computes its own statistics for each domain. The auxiliary branch is a convolutional layer with stride 1; Batch Normalization and ReLU are applied after this layer. The main loss weighting $\lambda_m$ and the auxiliary loss weighting $\lambda_a$ are set to balance the contributions of real samples and transformed samples. The HLW strategy that employs different loss weightings is applied; that is, the main loss weighting $\lambda_m$ does not have to equal the auxiliary loss weighting $\lambda_a$. We evaluated how zoom augmentation, AdaBN and HLW affect the performance on the validation set of the real-world images.

AdaBN | Zoom augmentation | Main loss weighting | Auxiliary loss weighting | mIOU
      | Fixed             |                     |                          | 63.1
      | Fixed             |                     |                          | 72.1
      | Random            |                     |                          | 72.7
      | Random            |                     |                          | 73.2
      | Random            |                     |                          | 73.5
      | Random            |                     |                          | 74.3
      | Random            |                     |                          | 73.7
      | Random            |                     |                          | 72.7
      | Random            |                     |                          | 73.4
      | Random            |                     |                          | 74.3
      | Random            |                     |                          | 73.6
TABLE I: Evaluation on the Validation Set of the Real-World Images Using RDCNet-8
Fig. 10: Bird's eye view image semantic segmentation obtained by mapping the segmentation results of raw surround view images onto the bird's eye view plane.
(a) Front view
(b) Rear view
(c) Left view
(d) Right view
(e) List of 18 class names and their corresponding colors used for labeling. The void class is marked as black. The vertical separator means a standing structure used to separate areas, such as walls, fences and guard rails. Road markings, which are painted on the road surface, are used to convey traffic information, usually including white or yellow lines or patterns; we believe this class is beneficial to solutions that use surround view cameras. Other classes adopt the same definitions as those in Cityscapes. Other unclear or ignored objects are assigned the void label, e.g., the reverse sides of traffic signs, commercial signs, electric wires and the invalid boundaries of the images.
Fig. 11: Examples of RDCNet-8 results on the validation set of the real-world images. The results for the front, rear, left and right views are displayed in (a), (b), (c) and (d), respectively. The first column is the raw image, the second column is the ground truth, and the third column is the result produced by RDCNet-8. The color code is listed in (e).

The evaluation results are reported in Table I. Adopting AdaBN largely improves the performance. Zoom augmentation with a randomly changing focal length brings an extra improvement over a fixed focal length: training images with different degrees of distortion improve the generalization ability of the model. Employing auxiliary losses is also beneficial. When decreasing the main loss weighting $\lambda_m$ with the auxiliary loss weighting $\lambda_a$ kept equal to it, the model shows performance degradation; the degradation is caused by overfitting due to the weak regularization. When setting a smaller $\lambda_m$ and a bigger $\lambda_a$, the performance shows a significant improvement, and RDCNet-8 achieved its best result of 74.3 mIOU with the two best weighting combinations in Table I. This indicates that HLW is an effective way to reduce overfitting. Fig. 11 illustrates some results produced by RDCNet-8 with the best-performing loss weightings. We can see that the proposed method performs well on raw surround view images with large distortions and different perspectives.

The approach of fine-tuning a pre-trained model using the real-world images was also tested and resulted in poor performance. Training the net with conventional image datasets instead of the datasets transformed by zoom augmentation can also improve the performance, but not as much as with the transformed datasets. The road scene semantic segmentation results on raw surround view images can be used as a source for other tasks. For example, in order to obtain semantic segmentation results on a bird's eye view image, we can first compute semantic segmentation results on the raw surround view images and then map the results to the bird's eye view plane using Inverse Perspective Mapping (IPM), as illustrated in Fig. 10.
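As a sketch of that last step, a per-camera label map can be projected onto the ground plane with a planar homography obtained from extrinsic calibration; the homography here is an assumed calibration input rather than something derived in the text, and nearest-neighbor interpolation keeps the class labels intact.

```python
import cv2

def segmentation_to_birds_eye(label_map, homography, out_size):
    """Warp a segmentation label map onto the bird's eye view plane using a
    3x3 image-to-ground homography (Inverse Perspective Mapping)."""
    return cv2.warpPerspective(label_map, homography, out_size,
                               flags=cv2.INTER_NEAREST)
```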

Table II reports the forward pass time of ERFNet and RDCNet-8 on a single GTX 1080Ti. RDCNet-8 remains efficient, taking only slightly longer than the ERFNet, which can run at several FPS on an embedded GPU [29]. The times reported in Table II include the data transfer time from CPU to GPU and the processing time on the GPU, but do not cover the preprocessing time on the CPU or the data transfer time from GPU to CPU.

Model    | net.forward (s)
ERFNet   |
RDCNet   |
TABLE II: Forward Pass Time for a Single Input Image

VII Conclusion

This paper provides a solution for CNN-based surrounding environment perception using surround view cameras. First, the Restricted Deformable Convolution (RDC) is proposed to enhance the transformation modeling capability of CNNs, so that the network can handle images with large distortions. Second, in order to enrich the lacking surround view training data, the zoom augmentation method is proposed to transform conventional images to fisheye images, and two complementary existing datasets are transformed using this method. Finally, an RDC based semantic segmentation model is trained for real-world surround view images through a multi-task learning architecture with the approaches of AdaBN and HLW. Experiments have shown that the RDC based network can effectively handle fisheye images, and the proposed solution was successfully applied to road scene semantic segmentation using surround view cameras.

The RDC has a good ability to model geometric transformations and is less prone to saturation, whereas the deformable convolution shows a better modeling ability when applied only to the last few convolutional layers. As future work, the RDC and the deformable convolution should be combined in one network to further enhance the transformation modeling ability of CNNs.

References

  • [1] M. Siam, S. Elkerdawy, M. Jagersand, and S. Yogamani, “Deep semantic segmentation for automated driving: Taxonomy, roadmap and challenges,” arXiv preprint arXiv:1707.02432, 2017.
  • [2] C. Wang, H. Zhang, M. Yang, X. Wang, L. Ye, and C. Guo, “Automatic parking based on a bird’s eye view vision system,” Adv. Mech. Eng., vol. 6, p. 847406, 2014.
  • [3] R. Varga, A. Costea, H. Florea, I. Giosan, and S. Nedevschi, “Super-sensor for 360-degree environment perception: Point cloud segmentation using image features,” in IEEE Int. Conf. Intell. Transp. Syst., 2017, pp. 126–132.
  • [4] V. Fremont, M. T. Bui, D. Boukerroui, and P. Letort, “Vision-based people detection system for heavy machine applications,” Sensors, vol. 16, no. 1, p. 128, 2016.
  • [5] K. Miyamoto, “Fish eye lens,” JOSA, vol. 54, no. 8, pp. 1060–1061, 1964.
  • [6] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
  • [7] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in IEEE Conf. Comput. Vis. Pattern Recog., 2015, pp. 1–9.
  • [8] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conf. Comput. Vis. Pattern Recog., 2016, pp. 770–778.
  • [9] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, “Pyramid scene parsing network,” in IEEE Conf. Comput. Vis. Pattern Recog., July 2017, pp. 6230–6239.
  • [10] J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei, “Deformable convolutional networks,” arXiv preprint arXiv:1703.06211, 2017.
  • [11] G. J. Brostow, J. Fauqueur, and R. Cipolla, “Semantic object classes in video: A high-definition ground truth database,” Pattern Recog. Lett., vol. 30, no. 2, pp. 88–97, 2009.
  • [12] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, “The cityscapes dataset for semantic urban scene understanding,” in IEEE Conf. Comput. Vis. Pattern Recog., 2016, pp. 3213–3223.
  • [13] L. Deng, M. Yang, Y. Qian, C. Wang, and B. Wang, “CNN based semantic segmentation for urban traffic scenes using fisheye camera,” in IEEE Intell. Veh. Symp, 2017, pp. 231–236.
  • [14] J. Li, Y. Chen, L. Cai, I. Davidson, and S. Ji, “Dense transformer networks,” arXiv preprint arXiv:1705.08881, 2017.
  • [15] Y. Li, N. Wang, J. Shi, J. Liu, and X. Hou, “Revisiting batch normalization for practical domain adaptation,” arXiv preprint arXiv:1603.04779, 2016.
  • [16] T. Scharwächter and U. Franke, “Low-level fusion of color, texture and depth for robust road scene understanding,” in IEEE Intell. Veh. Symp, 2015, pp. 599–604.
  • [17] P. Sturgess, K. Alahari, L. Ladicky, and P. H. S. Torr, “Combining Appearance and Structure from Motion Features for Road Scene Understanding,” in Brit. Mach. Vis. Conf.   London, United Kingdom: BMVA, Sep. 2009.
  • [18] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein et al., “Imagenet large scale visual recognition challenge,” Int. J. Comput. Vis, vol. 115, no. 3, pp. 211–252, 2015.
  • [19] E. Shelhamer, J. Long, and T. Darrell, “Fully convolutional networks for semantic segmentation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 4, pp. 640–651, April 2017.
  • [20] F. Yu and V. Koltun, “Multi-scale context aggregation by dilated convolutions,” arXiv preprint arXiv:1511.07122, 2015.
  • [21] L. C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs,” IEEE Trans. Pattern Anal. Mach. Intell., vol. PP, no. 99, pp. 1–1, 2017.
  • [22] P. Wang, P. Chen, Y. Yuan, D. Liu, Z. Huang, X. Hou, and G. Cottrell, “Understanding convolution for semantic segmentation,” arXiv preprint arXiv:1702.08502, 2017.
  • [23] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, “Object detectors emerge in deep scene cnns,” arXiv preprint arXiv:1412.6856, 2014.
  • [24] G. Neuhold, T. Ollmann, S. R. Bulò, and P. Kontschieder, “The mapillary vistas dataset for semantic understanding of street scenes,” in IEEE Int. Conf. Comput. Vis., 2017.
  • [25] G. Ros, L. Sellart, J. Materzynska, D. Vazquez, and A. M. Lopez, “The synthia dataset: A large collection of synthetic images for semantic segmentation of urban scenes,” in IEEE Conf. Comput. Vis. Pattern Recog., 2016, pp. 3234–3243.
  • [26] A. Gaidon, Q. Wang, Y. Cabon, and E. Vig, “Virtual worlds as proxy for multi-object tracking analysis,” in IEEE Conf. Comput. Vis. Pattern Recog., 2016, pp. 4340–4349.
  • [27] S. R. Richter, V. Vineet, S. Roth, and V. Koltun, “Playing for data: Ground truth from computer games,” in Eur. Conf. Comput. Vis.   Springer, 2016, pp. 102–118.
  • [28] C. R. d. Souza, A. Gaidon, Y. Cabon, and A. M. López, “Procedural generation of videos to train deep action recognition networks,” in IEEE Conf. Comput. Vis. Pattern Recog., July 2017, pp. 2594–2604.
  • [29] E. Romera, J. M. Alvarez, L. M. Bergasa, and R. Arroyo, “ERFNet: Efficient Residual Factorized ConvNet for Real-Time Semantic Segmentation,” IEEE Trans. Intell. Transp. Syst., 2017.
  • [30] A. Paszke, A. Chaurasia, S. Kim, and E. Culurciello, “Enet: A deep neural network architecture for real-time semantic segmentation,” arXiv preprint arXiv:1606.02147, 2016.
  • [31] M. Treml, J. Arjona-Medina, T. Unterthiner, R. Durgesh, F. Friedmann, P. Schuberth, A. Mayr, M. Heusel, M. Hofmarcher, M. Widrich et al., “Speeding up semantic segmentation for autonomous driving,” NIPSW, vol. 1, no. 7, p. 8, 2016.
  • [32] J. Alvarez and L. Petersson, “Decomposeme: Simplifying convnets for end-to-end learning,” arXiv preprint arXiv:1606.05426, 2016.
  • [33] J. Kannala and S. S. Brandt, “A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 8, pp. 1335–1340, 2006.