AMNet: Deep Atrous Multiscale
Stereo Disparity Estimation Networks
In this paper, a new deep learning architecture for stereo disparity estimation is proposed. The proposed atrous multiscale network (AMNet) adopts an efficient feature extractor with depthwise-separable convolutions and an extended cost volume that deploys novel stereo matching costs on the deep features. A stacked atrous multiscale network is proposed to aggregate rich multiscale contextual information from the cost volume which allows for estimating the disparity with high accuracy at multiple scales. AMNet can be further modified to be a foreground-background aware network, FBA-AMNet, which is capable of discriminating between the foreground and the background objects in the scene at multiple scales. An iterative multitask learning method is proposed to train FBA-AMNet end-to-end. The proposed disparity estimation networks, AMNet and FBA-AMNet, show accurate disparity estimates and advance the state of the art on the challenging Middlebury, KITTI 2012, KITTI 2015, and Sceneflow stereo disparity estimation benchmarks.
Depth estimation is a fundamental computer vision problem that aims to predict a measure of the distance of each point in a captured scene. Accurate depth estimation has many applications, such as scene understanding, autonomous driving, computational photography, and improving the aesthetic quality of images by synthesizing the Bokeh effect. Given a rectified stereo image pair, depth estimation can be done by disparity estimation with camera calibration. For each pixel in one image, disparity estimation finds the shift between that pixel and its corresponding pixel on the same horizontal line in the other image, so that the two pixels are projections of the same 3D position.
Disparity estimation based on a stereo image pair is a well known problem in computer vision. Often the stereo images are first rectified to lie in the same image plane, such that corresponding pixels in the left and right images lie on the same horizontal line. Disparity estimation pipelines classically consist of four steps: feature extraction, matching cost computation, disparity aggregation and computation, and an optional disparity refinement step. Calculation of the matching cost at a given disparity is based on evaluating a function that measures the similarity between pixels in the left and right images under this disparity shift, which can simply be the sum of absolute differences of pixel intensities at the given disparity. Calculating the matching costs directly on pixel intensities is prone to errors due to practical variations such as illumination differences and inconsistencies, environmental factors such as rain, snow, and flares, and occlusions. Hence, the robustness of traditional stereo matching methods can be improved by first extracting features from the intensities, such as local binary patterns and local dense encoding. Disparity aggregation can be done by simple aggregation of the calculated cost over local box windows, or by guided-image cost volume filtering. Disparity calculation can be done by local, global, or semiglobal methods. Semiglobal matching (SGM) is considered the most popular method, as it is more robust than local window-based methods; it performs cost aggregation by approximately minimizing a two-dimensional energy function towards each pixel along eight one-dimensional paths. SGM is less complex than global methods such as graph cuts (GC), which minimize the two-dimensional energy function with full two-dimensional connectivity for the smoothness term.
Traditionally, disparity refinement is done by further checking for left and right consistencies, invalidating occlusions and mismatches, and filling such invalid segments by propagating neighboring disparity values.
Recently, there have been significant efforts in collecting datasets with stereo input images and their ground-truth disparity maps, e.g., SceneFlow, KITTI 2012, KITTI 2015, and the Middlebury stereo benchmark datasets. The existence of such datasets enabled supervised training of deep neural networks for the task of stereo matching, as well as the transparent testing and benchmarking of different algorithms on their hosting servers. Convolutional neural networks (CNNs) have become ubiquitous in addressing image processing and computer vision problems. CNN-based disparity estimation systems take their cues from the classical ones, and consist of different modules that attempt to perform the same four tasks of feature extraction, matching cost estimation, disparity aggregation and computation, and disparity refinement. First, deep features are extracted from the rectified left and right images using deep convolutional networks such as ResNet-50 or VGG-16. The cost volume (CV) is formed by measuring the matching cost between the extracted left and right deep feature maps. Typical choices for the matching cost include simple feature concatenation or metrics such as absolute distance or correlation [8, 14, 15, 16]. The CV is further processed and refined by a disparity computation module that regresses to the estimated disparity. Refinement networks can then be used to further refine the initial coarse depth or disparity estimates.
In this work, we propose a novel deep neural network architecture for stereo disparity estimation, the atrous multiscale network (AMNet). The proposed network architecture is shown in Fig. 2. We design our feature extractor by first modifying the standard ResNet-50 backbone to a depthwise separable ResNet (D-ResNet), which makes it feasible to design the network with a larger receptive field without increasing the number of trainable parameters. Second, we propose an atrous multiscale (AM) module, which is designed as a scene understanding module that captures deep global contextual information at multiple scales as well as local details. Our proposed feature extractor consists of the D-ResNet followed by the AM module. For matching cost computation, we design a new extended cost volume (ECV) that simultaneously computes different matching cost metrics and consists of several cost sub-volumes: a disparity-shifted feature concatenation sub-volume, a disparity-level depthwise correlation sub-volume, and a disparity-level feature distance sub-volume. The ECV carries rich information about the matching costs from the different similarity measures. For disparity computation and aggregation, the ECV is processed by a designed stacked AM module which stacks multiple AM modules for multiscale context aggregation. Disparity optimization is done by regression after the soft classification of the quantized disparity bins. To enhance the cost volume filtering, and improve the disparity computation and optimization steps, we also propose to learn the foreground-background segmentation as an auxiliary task. The learned foreground-background information reinforces disparity estimation similarly to image-guided cost-volume filtering. Hence, we train AMNet using multitask learning, in which the main task is disparity estimation and the auxiliary task is foreground-background segmentation. We call this multitask network the foreground-background aware AMNet (FBA-AMNet).
The auxiliary task helps the network have better foreground-background awareness so as to further improve disparity estimation. As discussed above, the optional step of disparity refinement can further improve the estimated disparity. However, in this work, no refinement has been applied on the estimated disparity.
The proposed AMNets ranked first among all published results on the three most popular disparity estimation benchmarks: KITTI stereo 2015 , KITTI stereo 2012 , and Sceneflow  stereo disparity tests. Some examples showing the superiority of our proposed atrous multiscale stereo disparity estimation networks are shown in Fig. 3 and Fig. 4.
The rest of this paper is organized as follows: In the next section, we give more details about previous and related research works. In Sec. III, detailed descriptions of the proposed AMNet are given. Details about FBA-AMNet are given in Sec. IV. In Sec. V, numerical and visual comparisons with state-of-the-art methods on standard benchmark tests are given. Sec. VI concludes this paper.
II Related Works
There has been significant interest in improving the extraction of contextual information using deep neural networks for better image understanding. The earlier methods used multiscale inputs from an image pyramid or implemented probabilistic graphical models. Recently, models with spatial pyramid pooling (SPP) and encoder-decoder structures have shown great improvements in various computer vision tasks. Zhao et al. proposed the PSPNet, which performs SPP at different grid scales. Chen et al. applied atrous convolutions to the SPP module (ASPP) to process the feature maps using several parallel atrous convolutions with different dilation factors. Newell et al. designed a stacked hourglass module, which stacks an encoder-decoder module three times with shortcut connections to aggregate multiscale contextual information. Chen et al. further developed the DeepLab v3+ model, which combined the ideas of the encoder-decoder architecture and ASPP.
Disparity estimation based on a stereo image pair is a well known problem in computer vision. CNN-based systems have recently become ubiquitous in solving this problem. In early work, Zbontar et al. proposed a Siamese network to match pairs of image patches for disparity estimation. The network consists of a set of shared convolutional layers, a feature concatenation layer, and a set of fully connected layers for second-stage processing and similarity estimation. Luo et al. developed a faster Siamese network in which the cost volume is formed by computing the inner product between the left and the right feature maps and the disparity estimation is formulated as a multi-label classification.
End-to-end neural networks have also been proposed for stereo disparity estimation. Mayer et al. proposed DispNet, which consists of a set of convolution layers for feature extraction, a cost volume formed by feature concatenation or patch-wise correlation, an encoder-decoder structure for second-stage processing, and a classification layer for disparity estimation. Motivated by the success of deep neural networks, Kendall et al. proposed GC-Net. GC-Net uses a deep residual network as the feature extractor, a cost volume formed by disparity-level feature concatenation to incorporate contextual information, a set of 3-D convolutions and 3-D deconvolutions for second-stage processing, and a soft argmin operation for disparity regression. To further explore the importance of contextual information, Chang and Chen proposed the pyramid stereo matching network (PSMNet). Before constructing the cost volume, PSMNet learns the contextual information from the extracted features through a spatial pyramid pooling module. For disparity computation, PSMNet processes the cost volume using a stacked hourglass CNN, which consists of three hourglass CNNs. Each hourglass CNN has an encoder-decoder architecture, where the encoder and decoder parts of each hourglass network involve downsampling and upsampling of feature maps, respectively.
Fusion of semantic segmentation information with other extracted information can result in better scene understanding, and hence has been shown effective in improving the accuracy of challenging computer vision tasks, such as multiscale pedestrian detection. Consequently, researchers have tried to utilize information from low-level vision tasks such as semantic segmentation or edge detection to reinforce the disparity estimation system. Yang et al. introduced the SegStereo model, which suggests that appropriate incorporation of semantic cues can rectify disparity estimation. The SegStereo model embeds semantic features to enhance intermediate features and regularize the loss term. Song et al. proposed EdgeStereo, where edge features are embedded by concatenating them to features at different scales of the residual pyramid network, trained using multiphase training.
Some works have been dedicated to designing disparity refinement networks to improve the depth or disparity estimated by previous state-of-the-art methods. Fergus et al. designed a coarse-to-fine depth refinement module that improved the accuracy of the depth estimated by a single-image depth estimation network. Recently, a refinement module called the convolutional spatial propagation network (CSPN) was proposed and trained to refine the output of existing state-of-the-art networks for single-image depth estimation or stereo disparity estimation, which improved their accuracies. A recent work, DispSegNet, concatenated semantic segmentation embeddings with the initial disparity estimates before passing them to the second-stage refinement network, which improved the disparity estimation in ill-posed regions.
III Atrous Multiscale Network
In this section, we describe each component of the proposed stereo disparity estimation network. The network architecture of the proposed AMNet is shown in Fig. 2.
III-A Depth Separable ResNet for Feature Extraction
We propose an efficient feature extractor using depthwise separable convolutions with residual connections. A standard convolution can be decomposed into a depthwise convolution followed by a pointwise convolution. Depthwise separable convolutions have recently shown great potential in image classification, and have been further developed as network backbones for other computer vision tasks. Depthwise separable residual networks have also been proposed for image enhancement tasks such as image denoising.
Inspired by these works, we design the D-ResNet, as the feature extraction backbone, by replacing standard convolutions with customized depthwise separable convolutions. Our approach differs from previous approaches whose goal is to reduce complexity. Instead, we use depthwise separable convolutions to increase the residual network's learning capacity while keeping the number of trainable parameters the same. A depthwise separable convolution replaces the three-dimensional convolution with two-dimensional convolutions done separately on each input channel (depthwise), followed by a pointwise convolution that combines the outputs of the separate convolutions into an output feature map. Let C_in and C_out represent the number of input and output feature maps at a convolutional layer, respectively. With k x k kernels, a standard convolutional layer contains k^2 C_in C_out parameters, while a depthwise separable convolutional layer contains C_in (k^2 + C_out) parameters, which is much smaller for typical choices of k, C_in, and C_out.
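This parameter comparison can be verified with a quick count (a sketch; the kernel size and channel counts below are illustrative, not the paper's configuration):

```python
def standard_conv_params(c_in, c_out, k=3):
    # a standard k x k convolution mixes channels and space jointly
    return k * k * c_in * c_out

def separable_conv_params(c_in, c_out, k=3):
    # depthwise: one k x k filter per input channel;
    # pointwise: a 1 x 1 convolution that mixes channels
    return k * k * c_in + c_in * c_out

# e.g. c_in = c_out = 128 with 3 x 3 kernels
print(standard_conv_params(128, 128))   # 147456
print(separable_conv_params(128, 128))  # 17536
```

The large gap is what allows the D-ResNet to widen its layers while keeping the total parameter count close to the original ResNet.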
We modified the 50-layer residual network (ResNet) proposed in PSMNet as a feature extractor, which consists of four groups of residual blocks, where each residual block consists of two convolutional layers with 3 x 3 kernels; the number of residual blocks in each group is listed in Table I. In PSMNet's ResNet, the number of output feature maps is the same for the two layers of a residual block and increases across the four residual groups. Since the number of output feature maps is large, a direct replacement of the standard convolutions with depthwise separable convolutions would result in a model with a much smaller number of parameters. In our proposed depth-separable ResNet (D-ResNet), however, we increase the number of output feature maps of the depthwise separable convolutional layers in the four residual groups, so as to make the number of parameters in the proposed D-ResNet close to that of PSMNet. Thus, the proposed D-ResNet can learn more deep features than ResNet while having a similar complexity. Since our proposed depth-separable residual blocks do not necessarily have the same number of input and output features, we deploy pointwise projection filters on the shortcut (residual) connections to project the input features onto the output feature dimensions. Fig. 5 shows a comparison between a standard ResNet block and the proposed D-ResNet block. ReLU and batch normalization are used after each layer. After the D-ResNet backbone, the widths and heights of the output feature maps are a fraction of those of the input image. The network specifications of the D-ResNet backbone are listed in Table I.
III-B Atrous Multiscale Context Aggregation
Since the accuracy of disparity estimation relies on the ability to identify key features at multiple scales, we consider aggregating multiscale contextual information from the deep features. Depth or disparity estimation networks tend to use downsampling and upsampling or encoder-decoder architectures, also called hourglass architectures [19, 15, 22], to aggregate information at multiple scales, but spatial resolution tends to be lost through pooling or downsampling. Instead, we use an AM module after the D-ResNet backbone to form the feature extractor. The deep features extracted by the D-ResNet from the stereo image pair are processed by the proposed atrous multiscale (AM) modules before being used to calculate the disparity, as shown in Fig. 2. Atrous (also called dilated) convolutions provide denser features than earlier methods such as pooling and feature rescaling. Inspired by the context network and the hourglass module, we design an AM module as a set of atrous convolutions with increasing dilation factors. The dilation factors increase as the AM module goes deeper, to enlarge the receptive field and capture denser multiscale contextual information without losing spatial resolution. Two convolutions with a dilation factor of one are added at the end for feature refinement and feature size adjustment.
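A minimal PyTorch sketch of such an AM module is given below; the dilation factors and channel width are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class AMModule(nn.Module):
    """Sketch of an atrous multiscale (AM) module: a chain of 3x3
    convolutions with increasing dilation factors (illustrative values),
    followed by two dilation-1 convolutions for refinement and size
    adjustment. Padding equals dilation, so spatial size is preserved."""
    def __init__(self, channels, dilations=(1, 2, 4, 8)):
        super().__init__()
        layers = []
        for d in dilations:
            layers += [nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                       nn.BatchNorm2d(channels), nn.ReLU(inplace=True)]
        # two final convolutions with dilation factor one
        layers += [nn.Conv2d(channels, channels, 3, padding=1),
                   nn.ReLU(inplace=True),
                   nn.Conv2d(channels, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)
```

Because each atrous convolution pads by its dilation factor, the module enlarges the receptive field while the feature maps keep their input resolution.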
III-C Extended Cost Volume Aggregation
We propose an extended cost volume (ECV), which combines different disparity cost metrics to diversify the information extracted about the true disparity. The cost volume takes as input the deep features extracted by the D-ResNet from the left image and the right image, labeled F_l and F_r, respectively. The ECV consists of three concatenated cost volumes: disparity-level feature distance, disparity-level depthwise correlation, and disparity-level feature concatenation. Let D be the maximum disparity the AMNet is designed to predict; the possible integer disparity shifts are then 0, 1, ..., D. The three constructed cost volumes, which are concatenated to form the ECV, are described below.
Disparity-level feature concatenation: Let F_r^d refer to the right deep features F_r shifted d pixels to the right to align with F_l, together with the necessary trimming and zero-padding, for d = 0, 1, ..., D. The left feature maps F_l and the disparity-aligned right feature maps F_r^d are concatenated for all disparity levels d. Let W, H, and C respectively be the width, height, and depth of the feature maps F_l and F_r. Then, the size of this cost volume is H x W x (D+1) x 2C.
Disparity-level feature distance: The point-wise absolute difference between F_l and F_r^d is computed at all disparity levels d. All the distance maps are packed together to form a sub-volume of size H x W x (D+1) x C.
Disparity-level depthwise correlation: Unlike earlier correlation layers, instead of computing correlations between a patch in F_l and all patches in F_r centered within a horizontal neighborhood of the same position (expanding along the horizontal line), we compute correlations between F_l and its corresponding patches in the aligned F_r^d across all disparity levels (expanding along the disparity dimension). To make the size of the output feature map comparable to the other sub-volumes, we implement the correlation depthwise: at each disparity level, the depthwise correlations of two aligned patches are computed and packed separately for each depth channel c = 1, ..., C, as given by Eq. 2 and Eq. 3.
The depthwise correlation is computed for all patches across all disparity levels, and the results are concatenated to form a cost volume of size H x W x (D+1) x C.
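The construction of the ECV can be sketched as follows. This is a simplified sketch: the patch-based depthwise correlation of Eq. 2 and Eq. 3 is approximated here by a per-channel pointwise product of the aligned features, and the function name is illustrative:

```python
import torch

def build_ecv(fl, fr, max_disp):
    """Sketch of the extended cost volume (ECV): for each disparity d,
    concatenate [F_l, F_r^d] (2C channels), |F_l - F_r^d| (C channels),
    and a per-channel correlation F_l * F_r^d (C channels), giving a
    (B, 4C, D+1, H, W) volume."""
    vols = []
    for d in range(max_disp + 1):
        # shift the right features d pixels to the right, zero-pad on the left
        if d > 0:
            frd = torch.zeros_like(fr)
            frd[:, :, :, d:] = fr[:, :, :, :-d]
        else:
            frd = fr
        concat = torch.cat([fl, frd], dim=1)   # 2C: feature concatenation
        dist = torch.abs(fl - frd)             # C: feature distance
        corr = fl * frd                        # C: depthwise correlation (pointwise variant)
        vols.append(torch.cat([concat, dist, corr], dim=1))
    return torch.stack(vols, dim=2)            # (B, 4C, D+1, H, W)
```

The extra disparity dimension produced by the stacking is what forces the subsequent stacked AM module to use 3-D convolutions.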
The final ECV has a size of H x W x (D+1) x 4C. To aggregate the ECV information with more coarse-to-fine contextual information, we propose cascading three AM modules with shortcut connections within, forming the stacked AM module (SAM). The architectures of the proposed AM module and SAM module are shown in Fig. 6. Note that, due to the disparity dimension introduced by the construction of the ECV, the stacked AM module is implemented with 3-D convolutions to process the ECV.
III-D Disparity Optimization
The smooth L1 loss is used to measure the difference between the predicted disparity d̂_i and the ground-truth disparity d_i at the i-th pixel. The loss is computed as the average smooth L1 loss over all labeled pixels. During training, three losses are computed separately at the outputs of the three AM modules in the SAM module and summed up to form the final loss, as shown in Eq. (4) and Eq. (5):

L_j = (1/N) sum_i smoothL1(d_i - d̂_i,j),    (4)
L = L_1 + L_2 + L_3,    (5)

where smoothL1(x) = 0.5 x^2 if |x| < 1 and |x| - 0.5 otherwise, N is the total number of labeled pixels, and d̂_i,j is the disparity predicted at the output of the j-th AM module. During testing, only the output from the final AM module is used for disparity regression.
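A PyTorch sketch of this training loss is given below, assuming sparse ground truth in which unlabeled pixels are marked as zero (as in KITTI) and an illustrative maximum disparity:

```python
import torch
import torch.nn.functional as F

def disparity_loss(preds, gt, max_disp=192):
    """Sketch of the training loss: the smooth L1 losses of the three
    AM-module outputs are averaged over labeled pixels only and summed.
    max_disp=192 is an assumed value, not the paper's setting."""
    mask = (gt > 0) & (gt < max_disp)   # valid (labeled) pixels
    return sum(F.smooth_l1_loss(p[mask], gt[mask]) for p in preds)
```

Masking is essential here: KITTI ground-truth disparity maps are sparse, so unlabeled pixels must not contribute to the loss.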
At each output layer, the predicted disparity is calculated using the soft argmin operation for disparity regression. At each pixel i, a classification probability p_d(i) is found for each disparity value d in {0, 1, ..., D}, and the expectation of the disparities is computed as the disparity prediction, as shown in Eq. 6:

d̂_i = sum_{d=0}^{D} d * p_d(i),    (6)

where p_d(i) is the softmax probability of disparity d at pixel i and D is the maximum disparity value.
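The soft argmin of Eq. 6 can be sketched as follows, assuming the network output is a per-pixel cost over the D+1 disparity levels (lower cost means a better match):

```python
import torch

def soft_argmin(cost):
    """Soft argmin disparity regression: a softmax over the negated
    matching costs gives per-disparity probabilities p_d(i); the
    expectation over disparity indices 0..D is the prediction."""
    prob = torch.softmax(-cost, dim=1)                      # (B, D+1, H, W)
    disps = torch.arange(cost.size(1), dtype=cost.dtype)
    disps = disps.view(1, -1, 1, 1)                         # broadcast over B, H, W
    return (prob * disps).sum(dim=1)                        # (B, H, W)
```

Unlike a hard argmin, this operation is fully differentiable and yields sub-pixel disparity estimates.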
IV Foreground-Background Aware Atrous Multiscale Network
Given that disparities change drastically at locations where foreground objects appear, we conjecture that a better awareness of foreground objects will lead to better disparity estimation. In outdoor driving scenes such as KITTI, we define foreground objects as vehicles and humans. In this work, we utilize a foreground-background segmentation map to improve disparity estimation; we only differentiate between foreground and background pixels.
We considered different methods to utilize the foreground-background segmentation information: The first method is to directly feed the extra foreground-background segmentation information as an additional input besides the RGB image (RGB-S input) to guide the network. This requires accurate segmentation maps in both the training and testing stages. The second method is to train the network as a multitask network. The multitask network is designed to have a shared base and different heads for the two tasks. By optimizing the multitask network towards both tasks, the shared base is trained to have better awareness of foreground objects implicitly, which leads to better disparity estimation. This is the adopted method since it improves the discrimination capability of the main branch by trying to learn the auxiliary task of FBA, and does not require a standalone segmentation network, which can be quite complex. The network structure of the proposed FBA-AMNet is shown in Fig. 7. All layers in the feature extractor are shared between the main task of disparity estimation and the auxiliary task of foreground-background segmentation. Beyond the feature extractor, a binary classification layer, an up-sampling layer, and a softmax layer are added for foreground-background segmentation.
The network is trained end-to-end using multitask learning, where the loss function is a weighted combination of the losses due to the disparity error and the foreground-background classification error, given by L = L_disp + λ L_seg, where λ is the relative weight of the segmentation loss. We propose an iterative method to train FBA-AMNet. After each epoch, the latest estimated segmentation maps are concatenated with the RGB input to form an RGB-S input to the FBA-AMNet at the next epoch. During training, the network keeps refining and utilizing its foreground-background segmentation predictions so as to learn better awareness of foreground objects. At the inference stage, the segmentation task is ignored and we use zero maps as the extra input.
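The RGB-S input described above can be sketched as follows (a minimal sketch; the function name is illustrative):

```python
import torch

def make_rgbs_input(rgb, seg=None):
    """Form the RGB-S input: the latest estimated foreground-background
    map is appended to the RGB image as a fourth channel. At inference,
    or for the first training epoch, a zero map is used instead."""
    if seg is None:
        seg = torch.zeros_like(rgb[:, :1])   # zero segmentation map
    return torch.cat([rgb, seg], dim=1)      # (B, 4, H, W)
```

During iterative training, `seg` would be the segmentation prediction saved from the previous epoch; at test time the call with `seg=None` reproduces the zero-map behavior.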
Different from previous works which tried to utilize semantic segmentation [20, 17], the proposed foreground-background aware (FBA) network does not differentiate between the different classes of foreground objects or different background classes. We show in our ablation study that this foreground-background awareness gives more accurate disparity estimates than using full semantic segmentation. One explanation is that foreground-background segmentation can be learned more accurately than full semantic segmentation, as it is an easier task to learn, which allows the network optimization to focus more on the main task of disparity estimation.
V Experimental Results
In this section, we provide numerical and visualization results on public challenges and datasets.
V-A Datasets and Evaluation Metrics
KITTI stereo 2015: The KITTI benchmark provides images captured by a stereo camera pair in real-world driving scenes. KITTI stereo 2015 consists of 200 training stereo image pairs and 200 test stereo image pairs. Sparse ground-truth disparity maps are provided with the training data. The D1-all error is used as the main evaluation metric; it computes the percentage of pixels for which the estimation error is larger than 3 px and larger than 5% of the ground-truth disparity.
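The D1-all metric can be computed as follows (a sketch; `d1_all` is an illustrative helper, assuming unlabeled pixels in the sparse ground truth are marked as zero):

```python
import numpy as np

def d1_all(pred, gt):
    """D1-all: percentage of labeled pixels whose disparity error is
    larger than 3 px and larger than 5% of the ground-truth disparity."""
    valid = gt > 0                       # KITTI ground truth is sparse
    err = np.abs(pred[valid] - gt[valid])
    bad = (err > 3.0) & (err > 0.05 * gt[valid])
    return 100.0 * bad.mean()
```

Note that both conditions must hold for a pixel to count as an outlier: small relative errors on large disparities and sub-3-px errors are not penalized.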
KITTI stereo 2012: KITTI stereo 2012 consists of 194 training stereo image pairs and 195 test stereo image pairs. The Out-Noc error is used as the main evaluation metric; it computes the percentage of non-occluded pixels for which the estimation error is larger than 3 px.
Sceneflow: The Sceneflow benchmark is a synthetic dataset suite that contains more than 39,000 stereo image pairs rendered from various synthetic sequences. Three subsets containing around 35,000 stereo image pairs are used for training (FlyingThings3D training, Monkaa, and Driving), and one subset containing around 4,000 stereo image pairs is used for testing (FlyingThings3D test). Sceneflow provides complete ground-truth disparity maps for all images. The end-point error (EPE) is used as the evaluation metric.
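The EPE metric is simply the mean absolute disparity error over all pixels (illustrative helper; Sceneflow's ground truth is dense, so no validity mask is needed):

```python
import numpy as np

def epe(pred, gt):
    """End-point error: mean absolute disparity error over all pixels."""
    return np.abs(pred - gt).mean()
```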
Middlebury: The Middlebury stereo benchmark consists of a training set and a test set with 15 image pairs each, in three resolutions: full (F), half (H), and quarter (Q). Ground-truth disparities are provided for the 15 training images. Ten evaluation metrics are used, such as the 99-percent error quantile in pixels (A99) and the root-mean-square disparity error in pixels (RMS).
| Method | All: D1-bg | All: D1-fg | All: D1-all | Noc: D1-bg | Noc: D1-fg | Noc: D1-all | Runtime |
|---|---|---|---|---|---|---|---|
| GC-Net | 2.21% | 6.16% | 2.87% | 2.02% | 5.58% | 2.61% | 0.9 s |
| PDSNet | 2.29% | 4.05% | 2.58% | 2.09% | 3.68% | 2.36% | 0.5 s |
| PSMNet | 1.86% | 4.62% | 2.32% | 1.71% | 4.31% | 2.14% | 0.41 s |
| SegStereo | 1.88% | 4.07% | 2.25% | 1.72% | 3.41% | 2.00% | 0.7 s |
| EdgeStereo | 1.87% | 3.61% | 2.16% | 2.12% | 3.85% | 2.40% | 0.27 s |
| MC-CSPN | 1.56% | 3.78% | 1.93% | 2.12% | 3.85% | 2.40% | 0.9 s |
V-B Implementation Details
We first train an AMNet-8 and an AMNet-32 from scratch on the Sceneflow training set. For the two models, the maximum dilation factors of the atrous convolutional layers in the AM module are set to 8 and 32, respectively. The maximum disparity D and the segmentation loss weight λ are set to constant values across experiments. For a pair of input images, two patches at the same random location are cropped as inputs to the network. All pixels with a ground-truth disparity larger than the maximum disparity are excluded from training. The model is trained end-to-end with the Adam optimizer, with the learning rate decreased once after a fixed number of epochs. All models are implemented in PyTorch and trained on NVIDIA GPUs.
We fine-tune four models, an AMNet-8, an AMNet-32, an FBA-AMNet-8, and an FBA-AMNet-32, on KITTI from our pre-trained Sceneflow AMNet-8 and AMNet-32 models. The FBA-AMNet models are trained using the iterative training method described in Sec. IV with the Adam optimizer, with the learning rate decreased once during training and set larger for the newly added layers. Other settings are the same as those used for training on Sceneflow. The foreground-background segmentation maps are initialized as zeros for the first epoch.
We only trained FBA-AMNet on the KITTI benchmark datasets. Since the segmentation labels in the Sceneflow dataset are not consistent across scenes or objects, and are lacking in the Middlebury set, we do not train FBA-AMNet on the Sceneflow or Middlebury datasets, where we only trained AMNet. For the Middlebury benchmark, we fine-tune our pre-trained AMNet-32 model on the Middlebury training images at quarter resolution, using the same experimental settings as for the KITTI AMNet-32 model.
| Method | Out-Noc | Out-All | Avg-Noc | Avg-All |
|---|---|---|---|---|
| PDSNet | 1.92% | 2.53% | 0.9 px | 1.0 px |
| GC-Net | 1.77% | 2.30% | 0.6 px | 0.7 px |
| EdgeStereo | 1.73% | 2.18% | 0.5 px | 0.6 px |
| SegStereo | 1.68% | 2.03% | 0.5 px | 0.6 px |
| PSMNet | 1.49% | 1.89% | 0.5 px | 0.6 px |
| AMNet-8 | 1.38% | 1.79% | 0.5 px | 0.5 px |
| AMNet-32 | 1.33% | 1.74% | 0.5 px | 0.5 px |
| FBA-AMNet-8 | 1.36% | 1.76% | 0.5 px | 0.5 px |
| FBA-AMNet-32 | 1.32% | 1.73% | 0.5 px | 0.5 px |
V-C Experimental Results
Performance on the KITTI stereo 2015 test set: We submitted our estimated disparity maps to the KITTI server to evaluate our four models, AMNet-8, AMNet-32, FBA-AMNet-8, and FBA-AMNet-32, on the KITTI stereo 2015 test set, and compare them with all published methods on all evaluation settings. The results are shown in Table II. All four of our models perform better than the published state-of-the-art methods on D1-all by significant margins. The FBA-AMNet-32 model lowers the D1-all error on all pixels below that of EdgeStereo, the previous best result with an end-to-end network. Our end-to-end FBA-AMNet is also better than two-stage solutions like MC-CSPN, which added a depth refinement head on top of PSMNet to improve its performance. Visualizations of the disparity maps and comparisons with state-of-the-art methods on two challenging scenes from the KITTI test set can be observed in Fig. 3 and Fig. 8. The D1-all error for all pixels is computed for each method, and demonstrates that the proposed FBA-AMNet has the lowest percentage of pixels with erroneous disparity estimates.
Performance on the KITTI stereo 2012 test set: Performance comparisons on the KITTI stereo 2012 test set are shown in Table III. Consistent with KITTI stereo 2015, our four models significantly outperform all other published methods on all evaluation settings. The FBA-AMNet-32 model decreases Out-Noc to 1.32%, a relative gain of about 11% compared to the previous best result of 1.49%. Note that only results with an error threshold of 3 px are reported here; they are consistent with the results for other error thresholds as well. Disparity map visualizations with FBA-AMNet-32, PSMNet, and SegStereo on two challenging KITTI stereo 2012 test images are shown in Fig. 9. The Out-Noc error is computed for each method and confirms the superiority of FBA-AMNet.
Performance on the Sceneflow test set: We compare the AMNet-8 and AMNet-32 models with all published methods on the Sceneflow test set. Both of our models outperform the other methods by large margins. Results reported in EPE are shown in Table IV. Our AMNet-32 model pushes the EPE below the previous best result. Visualizations of the disparity maps generated by AMNet-32 and PSMNet on two Sceneflow test images are shown in Fig. 4, where the EPE is computed for each method.
Performance on the Middlebury test set: Performance comparisons on Middlebury  are shown in Table V. AMNet-32 achieves 106 on the A99 test dense metric, which ranks first among all submissions using quarter resolution images, and fourth among all published submissions.
V-D Ablation Study
In this subsection, we analyze the effectiveness of each component of the proposed architecture in detail. We conduct most of the analysis on the Sceneflow test set, since KITTI only allows a limited number of evaluations on the test set.
V-D1 AMNet versus FBA-AMNet on foreground pixels
Compared to the AMNet, the FBA-AMNet is designed and trained to generate smoother and more accurate shapes for foreground objects, which leads to finer disparity maps. We visualize the disparity estimation results of the AMNet-32 model and the FBA-AMNet-32 model on two challenging foreground objects from KITTI test images in Fig. 1. The visualizations support the fact that the FBA-AMNet is able to generate finer boundary details for the foreground objects.
V-D2 D-ResNet versus ResNet-50 as network backbone
We explore how changing the network backbone from a ResNet-50 to our proposed D-ResNet affects performance and complexity. We compare three models: the AMNet-32 model using PSMNet’s ResNet-50  as the network backbone, the AMNet-32 model after modifying the ResNet-50 by directly replacing the standard convolutions with depthwise separable convolutions, and our proposed D-ResNet specified in Table I. The results on the Sceneflow test set  and the number of parameters of each model are shown in Table VI. We can see that D-ResNet performs better than the reference ResNet-50 with fewer parameters.
| Model | EPE | Parameters |
|---|---|---|
| ResNet-50 (sep conv) | 0.81 | 1.72 million |
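The parameter saving from swapping standard convolutions for depthwise-separable ones can be checked with a quick count. The sketch below (ignoring biases and batch-norm parameters, for simplicity) compares a standard k×k convolution with its depthwise-separable factorization; the channel counts are illustrative, not the exact D-ResNet configuration:

```python
def std_conv_params(c_in, c_out, k):
    # Standard convolution: one k x k x c_in kernel per output channel.
    return c_in * c_out * k * k

def sep_conv_params(c_in, c_out, k):
    # Depthwise-separable: one k x k depthwise filter per input channel,
    # followed by a 1 x 1 pointwise convolution mixing channels.
    return c_in * k * k + c_in * c_out

# Example: a 128 -> 128 channel layer with 3 x 3 kernels.
std_conv_params(128, 128, 3)   # 147456
sep_conv_params(128, 128, 3)   # 17536  (~8.4x fewer parameters)
```

The roughly k²-fold reduction (for c_out ≫ k²) is what lets D-ResNet keep a deep feature extractor at a small fraction of the ResNet-50 parameter budget.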
V-D3 Ablation study for the extended cost volume
We perform an ablation study for the extended cost volume (ECV) with seven models modified from the AMNet-32 model by using different combinations of the three constituent volumes of the ECV introduced in Sec. III-C. Comparisons of the results on the Sceneflow test set are shown in Table VII. The results show that the disparity-level feature distance volume is more effective than the other two, and a combination of the three volumes to form the ECV leads to the best performance.
| Cost volume | EPE | Feature size |
|---|---|---|
| Dist. + Corr. | 0.78 | HW(D+1)2C |
| Dist. + FC | 0.76 | HW(D+1)3C |
| Corr. + FC | 0.8 | HW(D+1)3C |
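To make the three constituent costs concrete, the sketch below computes them for a single left/right feature pair at one candidate disparity (the exact definitions follow Sec. III-C; this is an illustration on C-dimensional feature vectors, with the correlation taken as a normalized dot product):

```python
def matching_costs(feat_l, feat_r):
    """For one pixel pair at a candidate disparity shift, compute the
    three per-disparity costs combined in the ECV:
      - elementwise feature distance (C values),
      - correlation, i.e. a normalized dot product (1 value),
      - feature concatenation (2C values)."""
    dist = [abs(a - b) for a, b in zip(feat_l, feat_r)]
    corr = sum(a * b for a, b in zip(feat_l, feat_r)) / len(feat_l)
    concat = feat_l + feat_r
    return dist, corr, concat

# Toy 2-dimensional features for a left pixel and its shifted right pixel:
matching_costs([1.0, 2.0], [1.0, 3.0])
# -> ([0.0, 1.0], 3.5, [1.0, 2.0, 1.0, 3.0])
```

Stacking these costs for every pixel over disparities 0..D yields the per-volume feature sizes listed in Table VII, and concatenating all three forms the full ECV.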
V-D4 Going deeper with the AM module
Table VIII shows how different architectures of the AM module affect the performance and speed of the AMNet model when its maximum dilation factor is set to , , , and . We confirm that a deeper structure allows the AM module to aggregate more multiscale contextual information, leading to a finer feature representation and more accurate disparity estimation, at the expense of extra computational cost.
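The effect of a larger maximum dilation factor can be quantified through the receptive field: a stack of 3×3 dilated convolutions with rates r_1, …, r_n sees 1 + 2·Σ r_i pixels along each axis. The dilation schedules below are illustrative doubling sequences, not the exact AM module configuration:

```python
def receptive_field(dilations, k=3):
    """Receptive field (in pixels, along one axis) of a stack of k x k
    dilated convolutions: each layer with dilation r adds (k - 1) * r
    pixels of context on top of the single starting pixel."""
    rf = 1
    for r in dilations:
        rf += (k - 1) * r
    return rf

# Doubling the dilation rate per layer grows context exponentially
# while the parameter count only grows linearly with depth:
receptive_field([1, 2, 4, 8])          # 31
receptive_field([1, 2, 4, 8, 16, 32])  # 127
```

This is why going deeper with larger maximum dilation factors aggregates much wider context without any downsampling, at the cost of the extra layers' computation.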
V-D5 Performance visualizations of the foreground-background segmentation task
Figure 10 shows one image from the KITTI stereo 2015 test set and the coarse-to-fine foreground-background segmentation results generated by the FBA-AMNet-32 model at training epochs , , , and . The visualizations show that during training, the multitask network gradually learns better awareness of foreground objects. This shows how the network can learn the auxiliary task of foreground-background segmentation while focusing more on learning the main task of disparity estimation.
In this paper, we proposed atrous multiscale networks (AMNet) as a deep-learning-based solution to the problem of stereo disparity estimation. We proposed an atrous multiscale (AM) module that aggregates contextual features at multiple scales without the conventional downsampling and upsampling operations adopted by previous hourglass modules. The AM module is used in feature extraction to aggregate the features extracted by our proposed depthwise separable residual network. We proposed an extended cost volume (ECV) that aggregates different disparity costs for a more accurate estimation. We also showed how several AM modules can be stacked with shortcut connections to form the stacked atrous multiscale (SAM) module, which we use to fuse the different volumes in the ECV and to aggregate the cost at multiple scales. We further proposed iterative multitask training of the foreground-background aware AMNet (FBA-AMNet), which learns the auxiliary task of foreground-background segmentation to provide attention to foreground-background transitions. Comparisons between FBA-AMNet and AMNet throughout this paper confirm this benefit, and FBA-AMNet also outperforms prior art that used class-based semantic segmentation. Our method ranked first on the KITTI stereo 2015 leaderboard at the time we submitted our test results, and outperformed previously published state-of-the-art methods on the most popular disparity estimation benchmarks: SceneFlow, KITTI stereo 2012, and Middlebury.
In future work, we plan to deploy the proposed SAM networks for other tasks such as single-image depth estimation and semantic segmentation, as they showed a clear benefit over previous state-of-the-art approaches.
-  D. Scharstein and R. Szeliski, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” International journal of computer vision, vol. 47, no. 1-3, pp. 7–42, 2002.
-  H. Hirschmuller and D. Scharstein, “Evaluation of stereo matching costs on images with radiometric differences,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 9, pp. 1582–1599, Sep. 2009.
-  Z. Guo, L. Zhang, and D. Zhang, “A completed modeling of local binary pattern operator for texture classification,” IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1657–1663, 2010.
-  V. D. Nguyen, D. D. Nguyen, S. J. Lee, and J. W. Jeon, “Local density encoding for robust stereo matching,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 24, no. 12, pp. 2049–2062, Dec 2014.
-  A. Hosni, C. Rhemann, M. Bleyer, C. Rother, and M. Gelautz, “Fast cost-volume filtering for visual correspondence and beyond,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 2, pp. 504–511, 2013.
-  H. Hirschmuller, “Stereo processing by semiglobal matching and mutual information,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 328–341, Feb 2008.
-  V. Kolmogorov and R. Zabih, “What energy functions can be minimized via graph cuts?” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 2, pp. 147–159, 2004.
-  N. Mayer, E. Ilg, P. Häusser, P. Fischer, D. Cremers, A. Dosovitskiy, and T. Brox, “A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation,” in IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2016, arXiv:1512.02134. [Online]. Available: http://lmb.informatik.uni-freiburg.de/Publications/2016/MIFDB16
-  A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? the kitti vision benchmark suite,” in Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
-  M. Menze and A. Geiger, “Object scene flow for autonomous vehicles,” in Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
-  D. Scharstein, H. Hirschmüller, Y. Kitajima, G. Krathwohl, N. Nešić, X. Wang, and P. Westling, “High-resolution stereo datasets with subpixel-accurate ground truth,” in German conference on pattern recognition. Springer, 2014, pp. 31–42.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” arXiv preprint arXiv:1512.03385, 2015.
-  K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” CoRR, vol. abs/1409.1556, 2014.
-  J. Zbontar, Y. LeCun et al., “Stereo matching by training a convolutional neural network to compare image patches,” Journal of Machine Learning Research, vol. 17, pp. 1–32, 2016.
-  A. Dosovitskiy, P. Fischer, E. Ilg, P. Häusser, C. Hazırbaş, V. Golkov, P. v.d. Smagt, D. Cremers, and T. Brox, “Flownet: Learning optical flow with convolutional networks,” in IEEE International Conference on Computer Vision (ICCV), Dec 2015.
-  W. Luo, A. G. Schwing, and R. Urtasun, “Efficient deep learning for stereo matching,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016, pp. 5695–5703.
-  Z. Junming, K. A. Skinner, R. Vasudevan, and M. Johnson-Roberson, “Dispsegnet: Leveraging semantics for end-to-end learning of disparity estimation from stereo imagery,” IEEE Robotics and Automation Letters, 2019.
-  X. Cheng, P. Wang, and R. Yang, “Depth estimation via affinity learned with convolutional spatial propagation network,” in The European Conference on Computer Vision (ECCV), September 2018.
-  J.-R. Chang and Y.-S. Chen, “Pyramid stereo matching network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 5410–5418.
-  G. Yang, H. Zhao, J. Shi, Z. Deng, and J. Jia, “SegStereo: exploiting semantic information for disparity estimation,” in The European Conference on Computer Vision (ECCV), September 2018.
-  C. Farabet, C. Couprie, L. Najman, and Y. LeCun, “Learning hierarchical features for scene labeling,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1915–1929, Aug 2013.
-  D. Eigen and R. Fergus, “Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture,” in ICCV, 2015.
-  G. Lin, C. Shen, and A. van den Hengel, “Efficient piecewise training of deep structured models for semantic segmentation,” in CVPR, 2016.
-  P. Pinheiro and R. Collobert, “Recurrent convolutional neural networks for scene labeling,” in ICML, 2014.
-  P. Krahenbuhl and V. Koltun, “Efficient inference in fully connected crfs with gaussian edge potentials,” in NIPS, 2011.
-  A. Adams, J. Baek, and M. Davis, “Fast high-dimensional filtering using the permutohedral lattice,” in Eurographics, 2010.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” in Computer Vision – ECCV 2014, D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, Eds. Cham: Springer International Publishing, 2014, pp. 346–361.
-  H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, “Pyramid scene parsing network,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
-  L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs,” arXiv:1606.00915, 2016.
-  L.-C. Chen, Y. Yang, J. Wang, W. Xu, and A. L. Yuille, “Attention to scale: Scale-aware semantic image segmentation,” in CVPR, 2016.
-  A. Newell, K. Yang, and J. Deng, “Stacked hourglass networks for human pose estimation,” in ECCV, 2016, pp. 483–499.
-  L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, “Encoder-decoder with atrous separable convolution for semantic image segmentation,” in ECCV, 2018.
-  A. Kendall, H. Martirosyan, S. Dasgupta, P. Henry, R. Kennedy, A. Bachrach, and A. Bry, “End-to-end learning of geometry and context for deep stereo regression,” CoRR, vol. abs/1703.04309, 2017.
-  X. Du, M. El-Khamy, J. Lee, and L. Davis, “Fused dnn: A deep neural network fusion approach to fast and robust pedestrian detection,” in 2017 IEEE winter conference on applications of computer vision (WACV). IEEE, 2017, pp. 953–961.
-  X. Song, X. Zhao, H. Hu, and L. Fang, “Edgestereo: A context integrated residual pyramid network for stereo matching,” in ACCV, 2018.
-  D. Eigen, C. Puhrsch, and R. Fergus, “Depth map prediction from a single image using a multi-scale deep network,” in Advances in neural information processing systems, 2014, pp. 2366–2374.
-  F. Chollet, “Xception: Deep learning with depthwise separable convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1800–1807.
-  M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, “Mobilenetv2: Inverted residuals and linear bottlenecks,” in CVPR, 2018.
-  J. Dai, H. Qi, Y. Li, G. Zhang, H. Hu, and Y. Wei, “Deformable convolutional networks – COCO detection and segmentation challenge 2017 entry,” in ICCV COCO Challenge Workshop, 2017.
-  H. Ren, M. El-Khamy, and J. Lee, “DN-ResNet: Efficient deep residual network for image denoising,” in 14th Asian Conference on Computer Vision (ACCV 2018), 2018.
-  F. Yu and V. Koltun, “Multi-scale context aggregation by dilated convolutions,” in ICLR, 2016.
-  S. Tulyakov, A. Ivanov, and F. Fleuret, “Practical deep stereo (pds): Toward applications-friendly deep stereo matching,” NIPS, 2018.