Propagating Confidences through CNNs for Sparse Data Regression


Abstract

In most computer vision applications, convolutional neural networks (CNNs) operate on dense image data generated by ordinary cameras. Designing CNNs for sparse and irregularly spaced input data is still an open problem with numerous applications in autonomous driving, robotics, and surveillance. To tackle this challenging problem, we introduce an algebraically-constrained convolution layer for CNNs with sparse input and demonstrate its capabilities for the scene depth completion task. We propose novel strategies for determining the confidence from the convolution operation and propagating it to consecutive layers. Furthermore, we propose an objective function that simultaneously minimizes the data error while maximizing the output confidence. Comprehensive experiments are performed on the KITTI depth benchmark and the results clearly demonstrate that the proposed approach achieves superior performance while requiring three times fewer parameters than the state-of-the-art methods. Moreover, our approach produces a continuous pixel-wise confidence map enabling information fusion, state inference, and decision support.

Abdelrahman Eldesokey (abdelrahman.eldesokey@liu.se), Michael Felsberg (michael.felsberg@liu.se), Fahad Shahbaz Khan (fahad.khan@liu.se)
Computer Vision Laboratory, Department of Electrical Engineering, Linköping University, Linköping, Sweden

1 Introduction

In recent years, machine learning methods have achieved significant successes in many computer vision applications, making use of data from monocular passive image sensors such as grayscale, RGB, and thermal cameras. Data generated by these image sensors are typically dense, and most existing machine learning methods are designed to fully exploit this dense data in order to understand the scene content. In contrast, active sensors, such as LiDAR, RGB-D, and ToF cameras, produce sparse data: the sparsity is a consequence of the active acquisition process, as opposed to the passive measurement of light influx in conventional 2D sensors with dense output. This sparsity imposes additional challenges on machine learning methods, which must infer the missing data to find an accurate reconstruction of the entire scene.

Sensors with sparse outputs are becoming increasingly popular and, due to their range-measuring capability, have numerous applications in autonomous driving, robotics, and surveillance. One fundamental task is scene depth completion, which aims to reconstruct a full depth map from sparse input. Scene depth completion is a required processing step in, e.g., situation awareness and decision support. One of the key challenges when tackling the problem of scene depth completion is handling the missing values while also differentiating them from zero-valued regions. Besides densifying the depth map, corresponding confidences are also desirable since they provide information about the reliability of the output values. Such confidence maps are highly important for decision making in safety applications, e.g., obstacle detection in autonomous vehicles and robotics. Figure 1 shows an example of a depth completion task. Given the projected LiDAR point cloud, the objective is to densify the sparse depth map, either utilizing the RGB image (guided completion) or using only the projected point cloud (unguided completion). The output is a complete dense map together with a pixel-wise output confidence.

Figure 1: Depth map completion example: (a) RGB image, (b) projected LiDAR point cloud, (c) dense output, (d) pixel-wise output confidence. Most existing deep learning methods struggle in scenarios such as this due to the very high sparsity of the input data (95% of pixels are missing). [*The LiDAR image is dilated for the sake of visibility.]

Recently, deep learning, notably Convolutional Neural Networks (CNNs), has demonstrated great potential in solving a variety of computer vision tasks. Generally, CNNs are formed by several convolution, local normalization, and pooling layers. The final layers of CNNs are often fully connected (FC); for classification problems, the last FC layer employs a softmax function to approximate the probability or confidence over the class memberships. Such confidences are often missing in regression settings, although both the regressed value and its confidence are required for numerous applications. For example, it is not only relevant to know how far away a potential obstacle is located, but also how reliable this information is. For the scene depth completion task, several deep regression networks that introduce confidence measures have been proposed in the literature [Ren et al.(2015)Ren, Xu, Yan, and Sun, Uhrig et al.(2017)Uhrig, Schneider, Schneider, Franke, Brox, and Geiger, Chodosh et al.(2018)Chodosh, Wang, and Lucey, Liu et al.(2018)Liu, Reda, Shih, Wang, Tao, and Catanzaro]. However, all these methods utilize the confidence as a binary-valued mask to filter out the missing measurements. This strategy disregards valuable information available in the confidence maps. In contrast to these existing methods, we propose an approach that treats the signal confidence as a continuous measure of data uncertainty and propagates it through all layers.

In this paper, we propose an algebraically-constrained convolution operator for deep networks with sparse input to achieve a proper processing of confidences. The sparse input is equipped with confidences, and the network is required to produce a dense output. We derive novel methods for determining the confidence from the convolution operation and propagating it to consecutive layers. To maintain the confidences within a valid range, we impose non-negativity constraints on the network weights during training. Further, we introduce an objective function that simultaneously minimizes the data error while maximizing the output confidence. Moreover, we demonstrate the significance of the proposed confidence measure by introducing a novel approach for performing scale fusion based on confidences. Our proposed method achieves state-of-the-art results on the KITTI depth benchmark [Uhrig et al.(2017)Uhrig, Schneider, Schneider, Franke, Brox, and Geiger] while requiring only 480 parameters, which is three times fewer than state-of-the-art methods.

2 Related Work

Scene depth completion is a challenging problem that aims to construct a dense depth map from a sparse depth image. It shares similarities with image inpainting since both tasks require filling in missing information/pixels in an image. For image inpainting, several approaches based on deep learning have been introduced recently, however restricted to binary masks. These masks define regions in the image where missing pixels have zero values and the remaining pixels have ones. Köhler et al. [Köhler et al.(2014)Köhler, Schuler, Schölkopf, and Harmeling] showed quantitatively how incorporating those binary masks in training shallow networks leads to better results, even if the masks were not available at test time. Ren et al. [Ren et al.(2015)Ren, Xu, Yan, and Sun] proposed a convolution operation based on Shepard interpolation [Shepard(1968)] that also utilizes a binary mask to perform inpainting or super-resolution. They propagated the binary masks by convolving them with the same filters/weights as the data and thresholding insignificant values. Liu et al. [Liu et al.(2018)Liu, Reda, Shih, Wang, Tao, and Catanzaro] incorporated the use of binary masks in the U-Net architecture [Ronneberger et al.(2015)Ronneberger, Fischer, and Brox] for performing inpainting. The binary masks were propagated by setting the pixel at the filter origin to one if not all pixels within the filter support are unknown.

For scene depth completion, Uhrig et al. introduced the KITTI depth benchmark [Uhrig et al.(2017)Uhrig, Schneider, Schneider, Franke, Brox, and Geiger], which is a large-scale dataset for this task. They also proposed a method where the convolution operations are weighted using the binary masks, and the masks are propagated using the max pooling operation. In their work, they also investigated concatenating the binary mask to the input as an additional channel. Chodosh et al. [Chodosh et al.(2018)Chodosh, Wang, and Lucey] utilized compressed sensing to approach the sparsity problem for scene depth completion. A binary mask is employed to filter out the unmeasured values, and their method requires significantly fewer parameters compared to [Uhrig et al.(2017)Uhrig, Schneider, Schneider, Franke, Brox, and Geiger]. Ma and Karaman [Ma and Karaman(2017)] proposed a sparse-to-dense deep network which utilizes an RGB image and a randomly sampled set of sparse depth measurements to produce a dense depth map.

Our approach differs from the aforementioned methods in several aspects. Firstly, we treat the binary masks as continuous confidences instead of binary values, and we derive an algebraically-constrained deep convolution operator from the normalized convolution framework [Knutsson and Westin(1993)] that infers continuous output confidences. Secondly, different from [Ren et al.(2015)Ren, Xu, Yan, and Sun], we enforce the trained filters to be positive to obtain sound confidences. This also differs from [Uhrig et al.(2017)Uhrig, Schneider, Schneider, Franke, Brox, and Geiger, Liu et al.(2018)Liu, Reda, Shih, Wang, Tao, and Catanzaro], which employ a constant averaging filter for the confidences, a strategy that assumes a uniform confidence distribution among all pixels, which is generally not the case in real-world data. Thirdly, we do not constrain the output confidences to be binary as in [Ren et al.(2015)Ren, Xu, Yan, and Sun, Uhrig et al.(2017)Uhrig, Schneider, Schneider, Franke, Brox, and Geiger, Liu et al.(2018)Liu, Reda, Shih, Wang, Tao, and Catanzaro]. Instead, we propose a computational scheme that allows output confidences to be continuous while propagating confidence information from the input to the output. Finally, we demonstrate that utilizing normalized convolution to perform scale fusion in multi-scale networks based on confidences outperforms the standard convolution used in, e.g., U-Net [Ronneberger et al.(2015)Ronneberger, Fischer, and Brox, Liu et al.(2018)Liu, Reda, Shih, Wang, Tao, and Catanzaro]. Moreover, our proposed approach requires remarkably fewer parameters compared to the aforementioned approaches, while achieving state-of-the-art results.

3 Our Approach

Here, we describe our approach by starting with a brief introduction to the normalized convolution framework. We then introduce an algebraically-constrained normalized convolution operator for CNNs and the propagation method for confidences. Finally, we describe our proposed network architecture and the loss function taking confidences into account.

3.1 Normalized Convolution

Assume a sparse signal/image $f$ with missing parts due to noise, the acquisition process, preprocessing, or other system deficiencies. The missing parts of the signal are identified using a confidence mask $c$, which has zeros or low values at missing/uncertain locations and ones otherwise. The signal is sampled and, at each sample point $k$, the neighborhood is represented as a finite-dimensional vector $\mathbf{f}$ accompanied with a confidence vector $\mathbf{c}$ of the same size, both assumed to be column vectors. Using the notation from [Farnebäck(2002)], normalized convolution is defined at all locations as (the index $k$ is omitted to reduce clutter):

$$\mathbf{r} = \left( \mathbf{B}^{*}\, \mathbf{W}_a\, \mathbf{W}_c\, \mathbf{B} \right)^{-1} \mathbf{B}^{*}\, \mathbf{W}_a\, \mathbf{W}_c\, \mathbf{f} \qquad (1)$$

where $\mathbf{B}$ is a matrix which incorporates a set of basis functions in its columns, $\mathbf{W}_{\bullet}$ denotes a diagonal matrix with $\bullet$ vectorized on the diagonal, $\mathbf{a}$ is the applicability function which is a non-negative localization function for the basis $\mathbf{B}$, and $\mathbf{r}$ holds the coefficients of the signal at location $k$ projected onto the subspace spanned by $\mathbf{B}$.

The simplest case of normalized convolution assumes $\mathbf{B} = \mathbb{1}$, and it becomes normalized averaging. In this case, the signal is mapped onto a constant, localized with the applicability function, and (1) simplifies to $r = (\mathbf{a} \cdot \mathbf{c})^{\mathrm{T}} \mathbf{f} \, / \, \mathbf{a}^{\mathrm{T}} \mathbf{c}$, which can be formulated for the full signal $f$ and its confidence $c$ as ($*$ denotes convolution and $\cdot$ point-wise multiplication):

$$r = \frac{\mathbf{a} * (c \cdot f)}{\mathbf{a} * c} \qquad (2)$$
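To make the normalized averaging in (2) concrete, the following is a minimal sketch in PyTorch (the choice of framework is ours; the paper does not prescribe one). The Gaussian applicability, the 5% sampling density, and the image size are illustrative assumptions, not values from the paper; the next section replaces the fixed applicability with a learned one.

```python
import torch
import torch.nn.functional as F

def normalized_averaging(f, c, applicability, eps=1e-8):
    """Normalized averaging (Eq. 2): r = (a * (c . f)) / (a * c).

    f, c: tensors of shape (N, 1, H, W) holding the signal and its confidence
    (zeros at missing samples, so unmeasured values in f are ignored).
    applicability: fixed non-negative filter of shape (1, 1, k, k).
    """
    pad = applicability.shape[-1] // 2
    numer = F.conv2d(c * f, applicability, padding=pad)  # a * (c . f)
    denom = F.conv2d(c, applicability, padding=pad)      # a * c
    return numer / (denom + eps)

# Illustrative usage with a Gaussian applicability (an assumption, not learned).
k = 5
ax = torch.arange(k, dtype=torch.float32) - k // 2
gauss = torch.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / 2.0)
applicability = gauss.view(1, 1, k, k)

f = torch.rand(1, 1, 64, 64)                   # dense signal
c = (torch.rand(1, 1, 64, 64) < 0.05).float()  # keep roughly 5% of the samples
r = normalized_averaging(f, c, applicability)  # densified estimate
```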

3.2 Training the Applicability

The appropriate choice of the applicability function is an open issue as it usually depends on the nature of the data. Methods for statistically estimating the applicability function have been suggested [Mühlich and Mester(2004)], but we instead aim to learn the applicability as part of the training. This generalizes convolutional layers, as normalized averaging is equivalent to standard convolution in the case of signals with constant confidence. As described above, the applicability function acts as a confidence or localization function for the basis and is therefore essentially non-negative.

Non-negative applicabilities are feasible to train in standard frameworks, since back-propagation is based on the chain rule and any differentiable function with non-negative co-domain can be plugged in. Thus, a function $\Gamma$, e.g. the softplus, is applied to the weights $\mathbf{w}$, and the gradient for the weight element $w_i$ at convolution layer $\ell$ is calculated as:

$$\frac{\partial E}{\partial w_i^{\ell}} = \frac{\partial E}{\partial z^{\ell}} \cdot \frac{\partial z^{\ell}}{\partial \Gamma(w_i^{\ell})} \cdot \frac{\partial \Gamma(w_i^{\ell})}{\partial w_i^{\ell}} \qquad (3)$$

where $E$ is the loss between the output and the ground truth, and $z^{\ell}$ is the output of layer $\ell$ at the locations that were convolved with the weight element $w_i^{\ell}$. Accordingly, the forward pass for normalized convolution is defined as:

$$z^{\ell+1}(u,v) = \frac{\sum_{i,j} c^{\ell}(u+i, v+j)\, z^{\ell}(u+i, v+j)\, \Gamma(w_{i,j})}{\sum_{i,j} c^{\ell}(u+i, v+j)\, \Gamma(w_{i,j}) + \epsilon} \qquad (4)$$

where $c^{\ell}$ is the confidence from the previous layer, $\Gamma(\mathbf{w})$ is the applicability in this context, and $\epsilon$ is a small constant to prevent division by zero. Note that this is formally a correlation, as is common notation for CNNs.
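A minimal sketch of the forward pass in (4) as a PyTorch layer, assuming a single input and output channel for readability (the layers in Section 3.5 use several channels). The softplus plays the role of the non-negative function Γ applied to the raw weights, so the gradient in (3) is obtained automatically by autograd; the denominator is also returned since (6) reuses it.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NConv2d(nn.Module):
    """Single-channel normalized convolution layer implementing Eq. (4)."""

    def __init__(self, kernel_size=3, eps=1e-8):
        super().__init__()
        # Raw, unconstrained weights; softplus maps them to a non-negative
        # applicability Gamma(w), so the chain rule of Eq. (3) applies.
        self.weight = nn.Parameter(torch.randn(1, 1, kernel_size, kernel_size))
        self.pad = kernel_size // 2
        self.eps = eps

    def forward(self, z, c):
        a = F.softplus(self.weight)                   # Gamma(w) >= 0
        numer = F.conv2d(c * z, a, padding=self.pad)  # sum of c.z.Gamma(w)
        denom = F.conv2d(c, a, padding=self.pad)      # sum of c.Gamma(w)
        return numer / (denom + self.eps), denom
```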

3.3 Propagating Confidence

The main strength of the signal/confidence philosophy is the availability of confidence information alongside the signal. This confidence needs to be propagated through the network in order to output a pixel-wise confidence alongside the network prediction. For the normalized convolution framework, Westelius [Westelius(1995)] proposed a measure for propagating certainties:

$$c_{\mathrm{out}} = \left( \frac{\det \mathbf{G}}{\det \mathbf{G}_0} \right)^{1/m} \qquad (5)$$

where $\mathbf{G} = \mathbf{B}^{*}\mathbf{W}_a\mathbf{W}_c\mathbf{B}$, $\mathbf{G}_0 = \mathbf{B}^{*}\mathbf{W}_a\mathbf{B}$, and $m$ is the number of basis functions.

This measure calculates a geometric ratio between the Gramian matrix in the case of partial confidence and in the case of full confidence. Setting $\mathbf{B} = \mathbb{1}$, i.e., $m = 1$, we can utilize the already-computed denominator of (4) to propagate the confidence as follows:

$$c^{\ell+1}(u,v) = \frac{\sum_{i,j} c^{\ell}(u+i, v+j)\, \Gamma(w_{i,j})}{\sum_{i,j} \Gamma(w_{i,j})} \qquad (6)$$
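Propagating the confidence according to (6) then only requires normalizing the already-computed denominator of (4) by the total applicability mass; a sketch under the same single-channel assumption:

```python
import torch.nn.functional as F

def propagate_confidence(c, applicability, eps=1e-8):
    """Eq. (6): output confidence of a normalized convolution layer.

    c: input confidence map; applicability: Gamma(w) of the same layer.
    Fully confident inputs map to an output confidence of one.
    """
    pad = applicability.shape[-1] // 2
    denom = F.conv2d(c, applicability, padding=pad)  # sum of c.Gamma(w), as in Eq. (4)
    return denom / (applicability.sum() + eps)       # divided by sum of Gamma(w)
```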

3.4 Loss Function

For the scene depth completion task, we usually aim to minimize a norm, e.g. the $\ell_1$ or $\ell_2$ norm, between the output from the network and the ground truth. In our proposed method, we use the Huber norm, which is a hybrid between the $\ell_1$ and the $\ell_2$ norm and is defined as:

$$\| x \|_H = \begin{cases} \frac{1}{2} x^2, & |x| \leq \delta \\ \delta \left( |x| - \tfrac{1}{2}\delta \right), & |x| > \delta \end{cases} \qquad (7)$$

where $\delta$ is the threshold between the quadratic and the linear regime. The Huber norm helps prevent exploding gradients in the case of highly sparse data, which stabilizes the convergence of the network. Nonetheless, our aim is not only to minimize the error norm between the output and the ground truth, but also to increase the confidence of the output data. Thus, we propose a new loss which has a data term and a confidence term:

$$E = \frac{1}{N} \sum_{u,v} \left( \left\| z^{L}(u,v) - t(u,v) \right\|_H \;-\; \frac{1}{e}\, c^{L}(u,v) \;+\; \frac{1}{e}\, c^{L}(u,v) \left\| z^{L}(u,v) - t(u,v) \right\|_H \right) \qquad (8)$$

where $z^{L}$ is the data output from the final layer $L$, $c^{L}$ is the corresponding confidence output, $t$ is the ground truth, $N$ is the number of pixels with ground truth, $e$ is the epoch number, and $\|\cdot\|_H$ is the Huber norm. Note that the third term in the loss prevents the confidence from growing indefinitely. We weight the confidence terms by the reciprocal of the epoch number to prevent them from dominating the loss function when the data error starts to converge.
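A sketch of this objective in PyTorch, evaluated only where ground truth exists. smooth_l1_loss is used as a stand-in for the Huber norm, and the exact balance of the terms is our reading of (8): a data term, a term that rewards high confidence, and a term that penalizes confidence where the error is large, the latter two decayed by the reciprocal of the epoch number.

```python
import torch
import torch.nn.functional as F

def confidence_loss(z, c, t, epoch, valid_mask):
    """Data term plus confidence terms in the spirit of Eq. (8).

    z, c: network output and output confidence; t: ground truth;
    valid_mask: boolean mask of pixels that have ground truth;
    epoch: 1-based epoch number used to decay the confidence terms.
    """
    err = F.smooth_l1_loss(z[valid_mask], t[valid_mask], reduction='none')  # Huber-style error
    data_term = err.mean()
    conf = c[valid_mask]
    # Reward high confidence, but penalize being confident where the error is large.
    conf_term = -conf.mean() + (conf * err).mean()
    return data_term + conf_term / epoch
```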

Figure 2: Our proposed multi-scale architecture for the task of scene depth completion, which utilizes normalized convolution layers. Downsampling is performed using max pooling on the confidence maps, and the indices of the pooled pixels are used to select the same pixels from the feature maps. Different scales are fused by upsampling the coarser scale and concatenating it with the finer scale. A normalized convolution layer is then used to fuse the feature maps based on the confidence information. Finally, a 1×1 normalized convolution layer merges the different channels into a one-channel dense output and an output confidence map.

3.5 Network Architecture

Inspired by [Ronneberger et al.(2015)Ronneberger, Fischer, and Brox], we propose a hierarchical multi-scale architecture that shares the same weights between different scales, which leads to a very compact network, as shown in Figure 2. Downsampling is performed using max pooling on the confidences and, similar to [Zeiler and Fergus(2014)], we keep the indices of the pooled pixels, which are then used to select the same pixels from the feature maps, i.e., we keep the most confident feature map pixels. The downsampled confidences are divided by the Jacobian of the scaling to maintain absolute confidence levels. Scale fusion is performed by upsampling the coarser scale and concatenating it with the finer scale. We then apply a normalized convolution operator on the concatenated feature maps to allow the network to fuse the different scales utilizing confidence information.
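A sketch of the confidence-guided downsampling described above, with PyTorch and the tensor layout as assumptions: the pooling indices are computed on the confidence map, reused to gather the corresponding feature-map pixels, and the pooled confidence is divided by the Jacobian of the scaling (stride squared for isotropic downsampling).

```python
import torch
import torch.nn.functional as F

def confidence_pool(z, c, stride=2):
    """Downsample features z and confidences c, keeping the most confident pixels."""
    n, ch, h, w = c.shape
    # Pool the confidence map and keep the indices of the selected pixels.
    c_pooled, idx = F.max_pool2d(c, kernel_size=stride, stride=stride,
                                 return_indices=True)
    # Select the same pixels from the feature map.
    z_pooled = z.view(n, ch, -1).gather(2, idx.view(n, ch, -1)).view_as(c_pooled)
    # Divide by the Jacobian of the scaling to maintain absolute confidence levels.
    return z_pooled, c_pooled / stride ** 2
```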

4 Experiments

4.1 Experimental Setup

Dataset: We evaluate our method on the KITTI depth benchmark [Uhrig et al.(2017)Uhrig, Schneider, Schneider, Franke, Brox, and Geiger] which consists of projected LiDAR point clouds. The resulting depth maps/images are very sparse (approximately 4% of pixels have values). The benchmark has 86,000 training images, 7,000 validation images and 1,000 unannotated test images. The ground truth has missing parts as it was matched with the stereo disparity to remove projected LiDAR outliers. We evaluate on the full validation set as in [Chodosh et al.(2018)Chodosh, Wang, and Lucey] and the test set.

Implementation details: All our experiments are performed on a workstation with an Intel Xeon CPU (4 cores), 8 GB of RAM, and an NVIDIA GTX 1080 GPU with 8 GB of memory. NConv-HMS, NConv-1-Scale(4ch), and NConv-SF-STD are trained with a batch size of , while NConv-1-Scale(16ch) is trained with a batch size of . Our networks were trained on the first 10,000 out of 86,000 depth maps/images in the training set. We use the ADAM solver with default parameters, except for the learning rate, which we set to 0.01.

Evaluation metrics: For comparison, we use the same evaluation metrics as defined in [Chodosh et al.(2018)Chodosh, Wang, and Lucey, Ma and Karaman(2017)]: Mean Absolute Error (MAE), which is an unbiased error metric; Root Mean Square Error (RMSE), which penalizes large errors; Mean Absolute Relative Error (MRE), the ratio between the error magnitude and the ground-truth value; and the Inliers Ratio (δ_i), the percentage of pixels whose relative error is less than a specific threshold raised to the power i. As in [Chodosh et al.(2018)Chodosh, Wang, and Lucey], we use a challenging threshold value.
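For reference, a sketch of how these metrics can be computed on the valid ground-truth pixels (PyTorch assumed). The inliers-ratio threshold is left as a parameter, since the text above only states that a challenging (strict) value is used.

```python
import torch

def depth_metrics(z, t, delta=1.25, valid=None):
    """MAE, RMSE, MRE and inliers ratios delta_1..delta_3 over valid pixels.

    delta is the inliers-ratio base threshold (1.25 is a common default,
    not necessarily the value used in the comparison).
    """
    if valid is None:
        valid = t > 0                      # KITTI ground truth: 0 marks missing pixels
    z, t = z[valid], t[valid]
    abs_err = (z - t).abs()
    mae = abs_err.mean()
    rmse = (abs_err ** 2).mean().sqrt()
    mre = (abs_err / t).mean()
    ratio = torch.max(z / t, t / z)
    deltas = [(ratio < delta ** i).float().mean() for i in (1, 2, 3)]
    return mae, rmse, mre, deltas
```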

Method | MAE [m] | RMSE [m] | MRE | δ1 | δ2 | δ3 | #Params | Output Conf.
CNN [Uhrig et al.(2017)] | 0.78 | 2.97 | - | - | - | - | - | No
CNN+mask [Uhrig et al.(2017)] | 0.79 | 2.24 | - | - | - | - | - | No
SparseConv [Uhrig et al.(2017)] | 0.58 | 1.80 | 0.035 | 0.33 | 0.65 | 0.82 | - | No
Sparse-To-Dense [Ma and Karaman(2017)] | 0.70 | 1.68 | 0.039 | 0.21 | 0.41 | 0.59 | - | No
DCCS-1-Layer [Chodosh et al.(2018)] | 0.83 | 2.77 | 0.054 | 0.30 | 0.47 | 0.59 | - | No
DCCS-2-Layers [Chodosh et al.(2018)] | 0.47 | 1.45 | 0.028 | 0.41 | 0.68 | 0.80 | - | No
DCCS-3-Layers [Chodosh et al.(2018)] | 0.43 | 1.35 | 0.024 | 0.48 | 0.73 | 0.83 | - | No
NConv-1-Scale(16ch) | 0.40 | 1.58 | 0.022 | 0.60 | 0.81 | 0.88 | - | Yes
NConv-1-Scale(4ch) | 0.42 | 1.59 | 0.022 | 0.59 | 0.80 | 0.88 | - | Yes
NConv-HMS | 0.38 | 1.37 | 0.021 | 0.60 | 0.81 | 0.89 | 480 | Yes
NConv-SF-STD | 0.53 | 3.00 | 0.037 | 0.59 | 0.80 | 0.88 | - | No
Table 1: Evaluation results on the validation set. The results for CNN and CNN+mask are taken from [Uhrig et al.(2017)Uhrig, Schneider, Schneider, Franke, Brox, and Geiger]; those for SparseConv, Sparse-To-Dense, and DCCS are from [Chodosh et al.(2018)Chodosh, Wang, and Lucey]. Our multi-scale architecture NConv-HMS outperforms all other methods in all evaluation metrics except for RMSE, where it is slightly inferior to DCCS-3-Layers.

4.2 Quantitative Comparisons

We compare our method with state-of-the-art methods in the literature: the Sparsity Invariant Convolution (SparseConv) [Uhrig et al.(2017)Uhrig, Schneider, Schneider, Franke, Brox, and Geiger], Deep Convolutional Compressed Sensing (DCCS) [Chodosh et al.(2018)Chodosh, Wang, and Lucey], and Sparse-To-Dense [Ma and Karaman(2017)] approaches. As mentioned earlier, the Sparsity Invariant Convolution method applies a constrained convolution operation using binary masks. The DCCS approach [Chodosh et al.(2018)Chodosh, Wang, and Lucey] employs compressed sensing and Alternating Direction Neural Networks (ADNNs) to create a deep auto-encoder that constructs a dense output. The Sparse-To-Dense method [Ma and Karaman(2017)] utilizes a ResNet architecture to encode the sparse LiDAR point clouds and RGB images and then decodes a dense output.

Impact of continuous confidences: To evaluate the impact of employing our proposed confidence scheme, we evaluate a single-scale architecture as described in [Uhrig et al.(2017)Uhrig, Schneider, Schneider, Franke, Brox, and Geiger]. This architecture consists of 6 normalized convolution layers with 16 channels each, and we denote it NConv-1-Scale(16ch). To further demonstrate the efficiency of our approach, we evaluate the same architecture with only 4 channels, denoted NConv-1-Scale(4ch). Table 1 shows the results for both experiments as well as the other methods in comparison. Our single-scale architecture NConv-1-Scale(16ch) achieves superior results in terms of MAE, MRE, and the inliers ratios compared to all other methods. This demonstrates the advantage of our proposed confidence scheme compared to SparseConv [Uhrig et al.(2017)Uhrig, Schneider, Schneider, Franke, Brox, and Geiger]. Moreover, our compact architecture NConv-1-Scale(4ch) maintains the performance while requiring remarkably fewer parameters. However, DCCS-2-Layers and DCCS-3-Layers achieve a better RMSE than our proposed single-scale architecture, which we attribute to the insufficient receptive field of the network.

Multi-scale architecture: To address the problem of the limited receptive field of our single-scale architecture, we adopt a multi-scale architecture inspired by [Ronneberger et al.(2015)Ronneberger, Fischer, and Brox]. We further maintain the low number of parameters by sharing the weights/filters between different scales. The multi-scale architecture is illustrated in Figure 2 and denoted NConv-HMS. Table 1 provides the comparison between NConv-HMS and existing methods. Our NConv-HMS achieves better results than the single-scale architectures with respect to all the evaluation metrics. The RMSE is the most significantly reduced measure and becomes almost the same as for DCCS-3-Layers. Note also that the number of parameters is reduced to 480, which is remarkably fewer than all other methods in comparison.

Impact of the proposed scale-fusion scheme: A common approach to multi-scale fusion is to upsample the coarser scale, concatenate it with the finer scale, and then use a convolution layer to learn the proper fusion, as in [Ronneberger et al.(2015)Ronneberger, Fischer, and Brox, Liu et al.(2018)Liu, Reda, Shih, Wang, Tao, and Catanzaro]. Instead, we perform scale fusion using a normalized convolution layer which takes into account the confidence information embedded in the different scales. We evaluate both approaches in our multi-scale architecture, and our confidence-based approach NConv-HMS significantly outperforms the standard fusion approach NConv-SF-STD, as shown in Table 1. This clearly demonstrates the significance of utilizing confidence information for selecting the most confident data within the network.

Comparison on the test set: Here, we evaluate on the test set, which can only be done on the benchmark server. Table 2 shows the error metrics for the state-of-the-art deep-learning-based methods published in the literature. SparseConv [Uhrig et al.(2017)Uhrig, Schneider, Schneider, Franke, Brox, and Geiger] performs significantly better on the test set than on the validation set, while DCCS-3-Layers maintains its performance. NN+CNN corresponds to performing nearest-neighbor filling of the missing pixels and then training a CNN with the same architecture as [Uhrig et al.(2017)Uhrig, Schneider, Schneider, Franke, Brox, and Geiger] to enhance the output. Our approach outperforms all published state-of-the-art methods on the test set. Contrary to the validation set, our approach also outperforms DCCS-3-Layers on the test set.

Method | MAE [m] | RMSE [m]
SparseConv [Uhrig et al.(2017)] | 0.48 | 1.60
NN+CNN [Uhrig et al.(2017)] | 0.41 | 1.41
DCCS-3-Layers [Chodosh et al.(2018)] | 0.44 | 1.32
NConv-HMS (Ours) | 0.37 | 1.29
Table 2: Quantitative results on the test set. All the results are taken from the online KITTI depth benchmark [Uhrig et al.(2017)Uhrig, Schneider, Schneider, Franke, Brox, and Geiger]. Our method outperforms all published methods on the benchmark.
Figure 3: Examples of scene depth completion using our multi-scale architecture on the KITTI depth benchmark [Uhrig et al.(2017)Uhrig, Schneider, Schneider, Franke, Brox, and Geiger]. The first row shows the sparse projected LiDAR point clouds, the second row the ground truth images, the third row the dense outputs from our method, and the last row the output confidence maps. Our method performs favorably on densifying the sparse input, while providing a confidence map that indicates the output reliability.

4.3 Qualitative Analysis

To further analyze the impact of the proposed contributions, we perform a qualitative study on the KITTI depth benchmark [Uhrig et al.(2017)Uhrig, Schneider, Schneider, Franke, Brox, and Geiger]. Figure 3 shows examples of scene depth completion on two images from the benchmark. The inputs are projected LiDAR point clouds that are highly sparse. The ground truth images are not completely dense due to the strict outlier filtering adopted by [Uhrig et al.(2017)Uhrig, Schneider, Schneider, Franke, Brox, and Geiger]. This missing data poses a major challenge for methods to learn a good representation. As shown in the figure, our multi-scale architecture performs very well at densifying the sparse input. Moreover, the output confidences from our method provide an indication of how reliable the output depth maps are. At locations where neither input points nor ground truth information is available, e.g. behind the cyclists or below the billboard, the output confidence is very low. Further, the results show that regions in the center of the scene tend to have high confidence due to the high point cloud density in the input. This demonstrates that our method for confidence propagation enables the network to learn the prominence of different regions with respect to the ground truth.

Error analysis: As discussed earlier, our single-scale architecture suffers from a limited receptive field and fails to predict values for regions above the horizon in some images, which leads to a significant increase in the RMSE. We addressed this problem by adopting a multi-scale architecture to enlarge the receptive field, which allows our method to perform well on the whole validation set. For the multi-scale architecture, the error is mainly distributed along sharp edges and near the horizon. This is likely due to the absence of structural information that could be found in RGB images. Figure 4 shows an example of where the largest errors of our method are located. Evidently, those errors are distributed along the vehicle edges and close to the horizon. This problem could be addressed by incorporating prior knowledge about the structure of the scene from the RGB image.

Figure 4: An example of error analysis for our proposed method on KITTI Depth benchmark [Uhrig et al.(2017)Uhrig, Schneider, Schneider, Franke, Brox, and Geiger]. Top-left is the input RGB image, top-right is the projected LiDAR point cloud, bottom-left is the output from our method and bottom-right is the error map in logarithmic scale. The error is mainly distributed along edges and close to the horizon.

5 Conclusion

In this paper, we proposed an algebraically-constrained convolution layer for CNNs to tackle the issue of sparse and irregularly spaced input data. Unlike previous works, we treated the input masks as continuous confidences instead of binary values and equipped the sparse input with confidences. We further derived novel methods for determining the confidence from the convolution operation and propagating it to consecutive layers. A non-negativity constraint on the network weights is imposed to maintain the confidences within a valid range. Moreover, we introduced an objective function that simultaneously minimizes the data error while maximizing the output confidence. Comprehensive experiments were performed on the KITTI depth benchmark for scene depth completion. The results show that our approach achieves superior performance while requiring significantly fewer parameters. Finally, the continuous pixel-wise confidence map produced by our approach is shown to yield reasonable results, enabling proper information fusion, state inference, and decision support.

6 Acknowledgments

This research is funded by Vinnova through grant CYCLA and the Swedish Research Council through a framework grant for the project Energy Minimization for Computational Cameras (2014-6227).

References

  • [Chodosh et al.(2018)Chodosh, Wang, and Lucey] Nathaniel Chodosh, Chaoyang Wang, and Simon Lucey. Deep Convolutional Compressed Sensing for LiDAR Depth Completion. mar 2018. URL http://arxiv.org/abs/1803.08949.
  • [Farnebäck(2002)] Gunnar Farnebäck. Polynomial expansion for orientation and motion estimation. PhD thesis, Linköping University Electronic Press, 2002.
  • [Knutsson and Westin(1993)] Hans Knutsson and C-F Westin. Normalized and differential convolution. In Computer Vision and Pattern Recognition, 1993. Proceedings CVPR’93., 1993 IEEE Computer Society Conference on, pages 515–523. IEEE, 1993.
  • [Köhler et al.(2014)Köhler, Schuler, Schölkopf, and Harmeling] Rolf Köhler, Christian Schuler, Bernhard Schölkopf, and Stefan Harmeling. Mask-specific inpainting with deep neural networks. In German Conference on Pattern Recognition, pages 523–534. Springer, 2014.
  • [Liu et al.(2018)Liu, Reda, Shih, Wang, Tao, and Catanzaro] Guilin Liu, Fitsum A. Reda, Kevin J. Shih, Ting-Chun Wang, Andrew Tao, and Bryan Catanzaro. Image Inpainting for Irregular Holes Using Partial Convolutions. apr 2018. URL http://arxiv.org/abs/1804.07723.
  • [Ma and Karaman(2017)] Fangchang Ma and Sertac Karaman. Sparse-to-dense: Depth prediction from sparse depth samples and a single image. arXiv preprint arXiv:1709.07492, 2017.
  • [Mühlich and Mester(2004)] Matthias Mühlich and Rudolf Mester. A statistical extension of normalized convolution and its usage for image interpolation and filtering. In EUSIPCO, 2004.
  • [Ren et al.(2015)Ren, Xu, Yan, and Sun] Jimmy SJ Ren, Li Xu, Qiong Yan, and Wenxiu Sun. Shepard convolutional neural networks. In Advances in Neural Information Processing Systems, pages 901–909, 2015.
  • [Ronneberger et al.(2015)Ronneberger, Fischer, and Brox] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. pages 234–241. Springer, Cham, 2015. doi: 10.1007/978-3-319-24574-4_28. URL http://link.springer.com/10.1007/978-3-319-24574-4_28.
  • [Shepard(1968)] Donald Shepard. A two-dimensional interpolation function for irregularly-spaced data. In Proceedings of the 1968 23rd ACM national conference, pages 517–524. ACM, 1968.
  • [Uhrig et al.(2017)Uhrig, Schneider, Schneider, Franke, Brox, and Geiger] Jonas Uhrig, Nick Schneider, Lukas Schneider, Uwe Franke, Thomas Brox, and Andreas Geiger. Sparsity Invariant CNNs. aug 2017. URL http://arxiv.org/abs/1708.06500.
  • [Westelius(1995)] Carl-Johan Westelius. Focus of attention and gaze control for robot vision. PhD thesis, Linköping University, Computer Vision, The Institute of Technology, 1995.
  • [Zeiler and Fergus(2014)] Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European conference on computer vision, pages 818–833. Springer, 2014.