RotationNet: Joint Object Categorization and Pose Estimation Using Multiviews from Unsupervised Viewpoints

Asako Kanezaki (kanezaki.asako@aist.go.jp), National Institute of Advanced Industrial Science and Technology (AIST)
Yasuyuki Matsushita (yasumat@ist.osaka-u.ac.jp), Graduate School of Information Science and Technology, Osaka University
Yoshifumi Nishida (y.nishida@aist.go.jp), National Institute of Advanced Industrial Science and Technology (AIST)
Abstract

We propose a Convolutional Neural Network (CNN)-based model, “RotationNet,” which takes multi-view images of an object as input and jointly estimates its pose and object category. Unlike previous approaches that use known viewpoint labels for training, our method treats the viewpoint labels as latent variables, which are learned in an unsupervised manner during training using an unaligned object dataset. RotationNet is designed to use only a partial set of multi-view images for inference, which makes it useful in practical scenarios where only partial views are available. Moreover, our pose alignment strategy enables one to obtain view-specific feature representations shared across classes, which is important for maintaining high accuracy in both object categorization and pose estimation. The effectiveness of RotationNet is demonstrated by its superior performance over state-of-the-art 3D object classification methods on the 10- and 40-class ModelNet datasets. We also show that RotationNet, even when trained without known poses, achieves state-of-the-art performance on an object pose estimation dataset. The code is available at https://github.com/kanezaki/rotationnet

1 Introduction

Object classification accuracy can be enhanced by using multiple different views of a target object [4, 23]. Recent remarkable advances in image recognition and the growing collections of 3D object models have enabled the learning of multi-view representations of objects in various categories. However, in real-world scenarios, objects can often only be observed from limited viewpoints due to occlusions, which makes it difficult to rely on multi-view representations learned from the whole circumference. The desired property for real-world object classification is that, when a viewer observes a partial set (one or more images) of the full multi-view images of an object, it should recognize from which directions it observed the target object in order to correctly infer the object’s category. It is understood that object classification accuracy can be improved if the viewpoint is known. Likewise, a known object category helps infer the viewpoint. As such, object classification and viewpoint estimation form a tightly coupled problem, which benefits most from joint estimation.

Figure 1: Illustration of the proposed method, RotationNet. RotationNet takes a partial set (one or more images) of the full multi-view images of an object as input and predicts its object category, where the best pose is selected to maximize the object category likelihood. The viewpoints from which the images are observed are jointly estimated to predict the pose of the object.

We propose a new Convolutional Neural Network (CNN) model, which we call RotationNet, that takes multi-view images of an object as input and predicts its pose and object category (Fig. 1). RotationNet outputs viewpoint-specific category likelihoods corresponding to all pre-defined discrete viewpoints for each input image, and then selects the object pose that maximizes the integrated object category likelihood. Whereas RotationNet uses a complete set of multi-view images of an object captured from all the pre-defined viewpoints during training, for inference it is able to work with only a partial set of the multi-view images – a single image at minimum – as input. This property is particularly important for object classification in real-world scenarios, where it is often difficult to place a camera at all the pre-defined viewpoints due to occlusions. In addition, RotationNet does not require the multi-view images to be provided at once; it allows sequential input and updates the target object’s category likelihood on the fly. This property is suitable for applications that require on-the-fly classification with a moving camera.

The most distinctive feature of RotationNet is that it treats the viewpoints from which training images are observed as latent variables during training (Fig. 2). This enables unsupervised learning of object poses using an unaligned object dataset; thus, it eliminates the need for pose-normalization preprocessing, which is often sensitive to noise and to individual differences in shape. Our method automatically determines the basis axes of objects based on their appearance during training and achieves not only intra-class but also inter-class object pose alignment. Inter-class pose alignment is important for joint learning of object pose and category, because object classification hinges on emphasizing the differences between categories whose appearances are similar. Without inter-class pose alignment, learning a model that distinguishes, e.g., a car from a bus may become an ill-posed problem if the side view of a car is compared with the frontal view of a bus.

Our main contributions are as follows. We first show that RotationNet outperforms the current state-of-the-art classification performance on 3D object benchmark datasets consisting of 10 and 40 categories by a large margin (Table 5). Next, even though it is trained without ground-truth poses, RotationNet achieves performance superior to previous works on an object pose estimation dataset. We also show that our model generalizes well to a real-world image dataset that was newly created for the general task of multi-view object classification. (We will make the dataset publicly available after the peer review.) Finally, we train RotationNet with the new dataset, named MIRO, and demonstrate its performance in real-world applications using a moving USB camera or a head-mounted camera (Microsoft HoloLens).

2 Related work

There are two main approaches to CNN-based 3D object classification: voxel-based and 2D image-based approaches. The earliest work on the former is 3D ShapeNets [39], which learns a Convolutional Deep Belief Network that outputs probability distributions of binary occupancy voxel values. Later works following this approach showed improved performance [21, 20, 38]. Even when working with 3D objects, 2D image-based approaches have proven effective for general object recognition tasks. Su et al. [34] proposed the multi-view CNN (MVCNN), which takes multi-view images of an object captured from surrounding virtual cameras as input and outputs the object’s category label. Multi-view representations are also used for 3D shape retrieval [1]. Qi et al. [25] give a comprehensive study of voxel-based CNNs and multi-view CNNs for 3D object classification. Beyond these, point-based approaches [11, 24, 15] have recently been drawing much attention; however, their performance on 3D object classification is still inferior to that of multi-view approaches. The current state-of-the-art result on the ModelNet40 benchmark dataset is reported by Wang et al. [37], which is also based on the multi-view approach.

Because MVCNN integrates multiple views in a view-pooling layer that lies in the middle of the CNN, it requires a complete set of multi-view images recorded from all the pre-defined viewpoints for inference. Unlike MVCNN, our method is able to classify an object using a partial set of multi-view images that may be sequentially observed by a moving camera. Elhoseiny et al. [9] explored CNN architectures for joint object classification and pose estimation learned with multi-view images. Whereas their method takes a single image as input for its prediction, we mainly focus on how to aggregate predictions from multiple images captured from different viewpoints.

Viewpoint estimation plays a significant role in improving object classification. Better performance was achieved on face identification [45], human action classification [7], and image retrieval [36] by generating unseen views after observing a single view. These methods “imagine” the appearance of objects’ unobserved profiles, which is innately more uncertain than using real observations. Sedaghat et al. [29] proposed a voxel-based CNN that outputs orientation labels as well as classification labels and demonstrated that this improves 3D object classification performance.

All the methods mentioned above assume known poses in training samples; however, object poses are not always aligned in existing object databases. Novotny et al. [22] proposed a viewpoint factorization network that utilizes relative pose changes within each sequence to align objects in videos in an unsupervised manner. Our method also aligns object poses via unsupervised viewpoint estimation, where the viewpoints of images are treated as latent variables during training. Here, viewpoint estimation is learned in an unsupervised manner so as to best promote the object categorization task. In this respect, our method is related to Zhou et al. [44], where view synthesis is trained as the “meta”-task to train multi-view pose networks, utilizing the synthesized views as the supervisory signal.

Although joint learning of object classification and pose estimation has been widely studied [28, 19, 42, 2, 35], inter-class pose alignment has drawn little attention. However, sharing view-specific appearance information across classes is beneficial for simultaneously solving object classification and pose estimation. Kuznetsova et al. [17] pointed out this issue and presented a metric learning approach that shares visual components across categories for simultaneous pose estimation and class prediction. Our method also uses a model with view-specific appearances that are shared across classes; thus, it is able to maintain high accuracy for both object classification and pose estimation.

Figure 2: Illustration of the training process of RotationNet, where the number of views M is 3 and the number of categories N is 2. A training sample consists of the M images of an unaligned object and its category label y. For each input image, our CNN (RotationNet) outputs M histograms with N+1 bins each, normalized to sum to one. The last bin of each histogram represents the “incorrect view” class, which indicates how unlikely it is that the image corresponds to the respective viewpoint. According to the histogram values, we decide which image corresponds to views 1, 2, and 3. There are three candidates for the view rotation: (1, 2, 3), (2, 3, 1), and (3, 1, 2). For each candidate, we calculate the score for the ground-truth category (“car” in this case) by multiplying the corresponding histogram entries and select the best choice: (2, 3, 1) in this case. Finally, we update the CNN parameters in a standard back-propagation manner with the estimated viewpoint variables.

3 Proposed method

The training process of RotationNet is illustrated in Fig. 2. We assume that multi-view images of each training object instance are observed from all the pre-defined viewpoints. Let M be the number of pre-defined viewpoints and N the number of target object categories. A training sample consists of M images $\{x_i\}_{i=1}^{M}$ of an object and its category label $y \in \{1,\dots,N\}$. We attach a viewpoint variable $v_i$ to each image $x_i$ and set it to $j$ when the image is observed from the $j$-th viewpoint, i.e., $v_i \in \{1,\dots,M\}$. In our method, only the category label $y$ is given during training, whereas the viewpoint variables $\{v_i\}$ are unknown; namely, $\{v_i\}$ are treated as latent variables that are optimized in the training process.

RotationNet is defined as a differentiable multi-layer neural network $R(\cdot)$. The final layer of RotationNet is the concatenation of M softmax layers, each of which outputs the category likelihood $P(\hat{y}_i \mid x_i, v_i = j)$, where $j \in \{1,\dots,M\}$, for each image $x_i$. Here, $\hat{y}_i$ denotes an estimate of the object category label for $x_i$. For the training of RotationNet, we input the set of M images simultaneously and solve the following optimization problem:

$$\max_{R,\;\{v_i\}_{i=1}^{M}} \; \prod_{i=1}^{M} P(\hat{y}_i = y \mid x_i, v_i). \qquad (1)$$

The parameters of $R$ and the latent variables $\{v_i\}$ are optimized to output the highest probability of $y$ for the input multi-view images $\{x_i\}_{i=1}^{M}$.

Now, we describe how we design the outputs $P(\hat{y}_i \mid x_i, v_i)$. First of all, the category likelihood $P(\hat{y}_i = y \mid x_i, v_i)$ should become close to one when the estimated $v_i$ is correct; in other words, when the image is truly captured from the $v_i$-th viewpoint. Otherwise, when the estimated $v_i$ is incorrect, $P(\hat{y}_i = y \mid x_i, v_i)$ need not be high, because the image is captured from a different viewpoint. As described above, we decide the viewpoint variables $\{v_i\}$ according to the outputs as in (1). In order to obtain a stable solution of $\{v_i\}$ in (1), we introduce an “incorrect view” class and append it to the N target category classes. The “incorrect view” class plays a role similar to the “background” class in object detection tasks, which represents negative samples belonging to a “non-target” class. RotationNet then calculates $P(\hat{y}_i \mid x_i, v_i = j)$ by applying softmax functions to its $M(N{+}1)$-dimensional output, where $\sum_{k=1}^{N+1} P(\hat{y}_i = k \mid x_i, v_i = j) = 1$. Note that $P(\hat{y}_i = N{+}1 \mid x_i, v_i = j)$, which corresponds to the probability that the image belongs to the “incorrect view” class for the $j$-th viewpoint, indicates how likely it is that the estimated viewpoint variable $v_i = j$ is incorrect.
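To make the output structure concrete, the following is a minimal NumPy sketch (our own illustration, not the authors' implementation; the flat output layout is an assumption) of how an $M(N{+}1)$-dimensional network output for one image can be converted into the per-viewpoint category likelihoods described above.

```python
import numpy as np

def per_view_likelihoods(logits, num_views, num_classes):
    """Split a flat M*(N+1) output into M softmax distributions over N+1 classes.

    logits: shape (num_views * (num_classes + 1),); we assume the j-th block of
            (num_classes + 1) values corresponds to the j-th viewpoint.
    Returns P with shape (num_views, num_classes + 1), where P[j, k] plays the
    role of P(y_hat = k | x, v = j); the last column is the "incorrect view" class.
    """
    blocks = logits.reshape(num_views, num_classes + 1)
    blocks = blocks - blocks.max(axis=1, keepdims=True)  # numerical stability
    exp = np.exp(blocks)
    return exp / exp.sum(axis=1, keepdims=True)

# Toy example with M = 3 viewpoints and N = 2 categories, as in Fig. 2.
rng = np.random.default_rng(0)
P = per_view_likelihoods(rng.normal(size=3 * (2 + 1)), num_views=3, num_classes=2)
print(P.shape, P.sum(axis=1))  # (3, 3), each row sums to 1
```

Stacking such a matrix for each of the M input images gives the quantities used in (2)-(5) below.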

Based on the above discussion, we substantiate (1) as follows. For the purpose of loss calculation, we generate the target value of $P(\hat{y}_i \mid x_i, v_i = j)$ based on the current estimate of $v_i$. Letting $P^{(i)} \in \mathbb{R}^{M \times (N+1)}$ denote the matrix composed of $P(\hat{y}_i = k \mid x_i, v_i = j)$ for all viewpoints $j$ and classes $k$, the target value $T^{(i)}$ of $P^{(i)}$ in the case that $v_i$ is correctly estimated is defined as follows:

$$T^{(i)}_{j,k} = \begin{cases} 1 & \text{if } (j = v_i \text{ and } k = y) \text{ or } (j \neq v_i \text{ and } k = N{+}1), \\ 0 & \text{otherwise}. \end{cases} \qquad (2)$$

In this way, (1) can be rewritten as the following cross-entropy optimization problem:

$$\max_{R,\;\{v_i\}} \; \sum_{i=1}^{M} \sum_{j=1}^{M} \sum_{k=1}^{N+1} T^{(i)}_{j,k} \log P(\hat{y}_i = k \mid x_i, v_i = j). \qquad (3)$$

If we fix $\{v_i\}$ here, the above can be written as the following subproblem of optimizing $R$:

$$\max_{R} \; \sum_{i=1}^{M} \Bigl( \log P(\hat{y}_i = y \mid x_i, v_i) + \sum_{j \neq v_i} \log P(\hat{y}_i = N{+}1 \mid x_i, v_i = j) \Bigr), \qquad (4)$$

where the parameters of $R$ can be iteratively updated via standard back-propagation of softmax losses. Since $\{v_i\}$ are not constant but latent variables that need to be optimized during the training of $R$, we employ alternating optimization of $R$ and $\{v_i\}$. More specifically, in every iteration, our method determines $\{v_i\}$ according to the outputs obtained by forwarding the images through the (fixed) current $R$, and then updates the parameters of $R$ according to the estimated $\{v_i\}$, keeping them fixed.

The latent viewpoint variables $\{v_i\}$ are determined by solving the following problem:

$$\{\hat{v}_i\} = \arg\max_{\{v_i\}} \sum_{i=1}^{M} \Bigl( \log P(\hat{y}_i = y \mid x_i, v_i) + \sum_{j \neq v_i} \log P(\hat{y}_i = N{+}1 \mid x_i, v_i = j) \Bigr) = \arg\max_{\{v_i\}} \sum_{i=1}^{M} \Bigl( \log P(\hat{y}_i = y \mid x_i, v_i) - \log P(\hat{y}_i = N{+}1 \mid x_i, v_i) \Bigr), \qquad (5)$$

in which the conversion uses the fact that $\sum_{i=1}^{M} \sum_{j=1}^{M} \log P(\hat{y}_i = N{+}1 \mid x_i, v_i = j)$ is constant w.r.t. $\{v_i\}$. Because the number of candidates for $\{v_i\}$ is limited, we evaluate (5) for all the candidates and take the best choice. Deciding $\{v_i\}$ in this way emphasizes view-specific features for object categorization, which contributes to the self-alignment of objects in the dataset.
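As an illustration of this selection step, here is a small Python sketch (our own, under assumed array layouts) that enumerates the candidate viewpoint assignments and picks the one maximizing the score on the right-hand side of (5).

```python
import numpy as np

def select_viewpoints(P_all, label, candidates):
    """Choose the latent viewpoint assignment for one training instance.

    P_all:      shape (M, M, N+1); P_all[i, j, k] stands for
                P(y_hat_i = k | x_i, v_i = j) for the i-th input image.
    label:      ground-truth category index y (0-based here).
    candidates: iterable of length-M tuples of viewpoint indices, e.g. the M
                cyclic shifts of (0, 1, ..., M-1) in case (i).
    """
    eps = 1e-12
    best_score, best_assignment = -np.inf, None
    for assignment in candidates:
        # Sum of log P(true class) - log P("incorrect view"), as in (5).
        score = sum(
            np.log(P_all[i, j, label] + eps) - np.log(P_all[i, j, -1] + eps)
            for i, j in enumerate(assignment)
        )
        if score > best_score:
            best_score, best_assignment = score, assignment
    return best_assignment
```

With the selected assignment, the targets in (2) are formed and the parameters of $R$ are updated by back-propagation as in (4).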

In the inference phase, RotationNet takes as input $M'$ ($\leq M$) images of a test object instance, either simultaneously or sequentially, and outputs $M(N{+}1)$ probabilities for each of them. Finally, it integrates the outputs to estimate the category of the object and the viewpoint variables as follows:

$$\hat{y} = \arg\max_{y} \; \max_{\{v_i\}} \; \sum_{i=1}^{M'} \Bigl( \log P(\hat{y}_i = y \mid x_i, v_i) - \log P(\hat{y}_i = N{+}1 \mid x_i, v_i) \Bigr). \qquad (6)$$

Similarly to the training phase, we decide $\{v_i\}$ according to the outputs $P(\hat{y}_i \mid x_i, v_i)$. Thus RotationNet is able to estimate the pose of the object as well as its category label.
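A corresponding inference sketch (again our own simplification under the same assumed layout) scores every category against every candidate viewpoint assignment of the observed views and returns the best pair, mirroring (6); it works for any number of observed views up to M.

```python
import numpy as np

def classify(P_views, candidates):
    """Aggregate RotationNet outputs over a partial set of views.

    P_views:    shape (M_prime, M, N+1), per-view likelihoods for the observed
                images (M_prime <= M).
    candidates: candidate viewpoint assignments restricted to the observed
                images (length-M_prime tuples).
    Returns (predicted category index, selected viewpoint assignment).
    """
    eps = 1e-12
    num_classes = P_views.shape[2] - 1
    best = (-np.inf, None, None)
    for y in range(num_classes):
        for assignment in candidates:
            score = sum(
                np.log(P_views[i, j, y] + eps) - np.log(P_views[i, j, -1] + eps)
                for i, j in enumerate(assignment)
            )
            if score > best[0]:
                best = (score, y, assignment)
    return best[1], best[2]
```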

Viewpoint setups for training

While the choice of viewpoint setup can be arbitrary, we consider two setups in this paper, with and without an upright orientation assumption, similarly to MVCNN [34]. The former is often useful for images of real objects captured with a one-dimensional turntable, whereas the latter is better suited to unaligned 3D models. We also consider a third case that is likewise based on the upright orientation assumption (as in the first case) but uses multiple elevation levels. The three viewpoint setups are illustrated in Fig. 3.

case (i) case (ii) case (iii)
Figure 3: Illustration of three viewpoint setups considered in this work. A target object is placed on the center of each circle.

Case (i): with upright orientation   In the case where we assume upright orientation, we fix a specific axis as the rotation axis (e.g., the z-axis), which defines the upright orientation, and place viewpoints around this axis at equal azimuth intervals, elevated by a fixed angle (30° in this paper) from the ground plane. By default we use 30° azimuth intervals, which yields 12 views per object (M = 12). We define “view i+1” as the view obtained by rotating the camera position of “view i” by the azimuth interval about the rotation axis; the view obtained by rotating “view M” in the same way corresponds to “view 1.” We assume that the sequence of input images is consistent with respect to a certain direction of rotation in the training phase; for instance, if $v_i$ is 2, then $v_{i+1}$ is 3. Thus the number of candidates for all the viewpoint variables $\{v_i\}$ is M.
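For illustration, the candidate assignments of case (i) and the mapping from a selected discrete viewpoint to an azimuth angle can be sketched as follows (our own conventions; the 30° spacing corresponds to the default M = 12).

```python
def candidates_case1(num_views):
    """All M candidate viewpoint assignments in case (i): cyclic shifts of the
    view order, reflecting the single rotation direction about the upright axis."""
    return [tuple((shift + i) % num_views for i in range(num_views))
            for shift in range(num_views)]

def viewpoint_to_azimuth(view_index, num_views=12):
    """Map a 0-based discrete viewpoint index to an azimuth angle in degrees,
    assuming equally spaced views (30 degrees apart when num_views = 12)."""
    return view_index * (360.0 / num_views)

print(candidates_case1(4))       # the 4 cyclic shifts of (0, 1, 2, 3)
print(viewpoint_to_azimuth(3))   # 90.0
```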

Case (ii): w/o upright orientation   In the case where we do not assume upright orientation, we place virtual cameras on the 20 vertices of a dodecahedron encompassing the object (M = 20). We use a dodecahedron because it has the largest number of vertices among the regular polyhedra, so the viewpoints are distributed completely evenly in 3D space. Unlike case (i), where there is a unique rotation direction, there are three different patterns of rotation from a given view, because three edges are connected to each vertex of a dodecahedron. Therefore, the number of candidates for all the viewpoint variables is 60 (3M). (A dodecahedron has 60 orientation-preserving symmetries.)

Case (iii): with upright orientation and multiple elevation levels   This case is an extension of case (i). Unlike case (i), where the elevation angle is fixed, we place virtual cameras at several elevation levels spaced at a fixed angular interval, so the total number of viewpoints M is the number of azimuth positions multiplied by the number of elevation levels. As in case (i), the number of candidates for all the viewpoint variables equals the number of azimuth positions, due to the upright orientation assumption.

4 Experiments

In this section, we show the results of experiments with 3D model benchmark datasets (Sec. 4.1), a real image benchmark dataset captured with a one-dimensional turntable (Sec. 4.2), and our new dataset consisting of multi-view real images of objects viewed with two rotational degrees of freedom (Sec. 4.3). The baseline architecture of our CNN is AlexNet [16], which is smaller than the VGG-M network architecture used by MVCNN [34]. To train RotationNet, we fine-tune weights pre-trained on the ILSVRC 2012 dataset [27]. We used classical momentum SGD with a fixed learning rate and momentum for optimization.

As a baseline method, we also fine-tuned the pre-trained weights of a standard AlexNet CNN that only predicts object categories. To aggregate the predictions of multi-view images, we summed all the scores obtained through the CNN. This method can be regarded as a modified version of MVCNN [34] in which the view-pooling layer is placed after the final softmax layer. We chose average pooling for the view-pooling layer in this baseline setting, because we observed that it performed better than max pooling. For a fair comparison, we also implemented MVCNN [34] based on the AlexNet architecture with the original view-pooling layer.
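For clarity, the baseline's late view-pooling can be sketched as follows (a minimal illustration of our own, not the authors' code): the per-view softmax score vectors from a viewpoint-agnostic classifier are simply averaged before taking the argmax.

```python
import numpy as np

def late_view_pooling(class_probs):
    """Average the per-view softmax scores and predict the category.

    class_probs: shape (num_views, num_classes), softmax outputs of a standard
                 single-view classifier applied to each view independently.
    """
    return int(np.argmax(class_probs.mean(axis=0)))
```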

4.1 Experiment on 3D model datasets

We first describe the experimental results on two 3D model benchmark datasets, ModelNet10 and ModelNet40 [39]. ModelNet10 consists of 4,899 object instances in 10 categories, whereas ModelNet40 consists of 12,311 object instances in 40 categories. First, we show object classification accuracy versus the number of views used for prediction in cases (i) and (ii) with ModelNet40 and ModelNet10, respectively, in Fig. 4 (a)-(b) and Fig. 4 (d)-(e). For a fair comparison, we used the same training and test split of ModelNet40 as in [39] and [34]. We prepared multi-view images (i) with and (ii) without the upright orientation assumption using the rendering software published in [34]. We report the average scores over multiple trials with randomly selected multi-view sets. In Figs. 4 (a) and 4 (d), which show the results on ModelNet40, we also plot the scores of the original MVCNN with a Support Vector Machine (SVM) reported in [34]. Interestingly, since we focus on the object classification task whereas Su et al. [34] focused more on object retrieval, we found that the baseline method with late view-pooling is slightly better in this case than the original MVCNN with the view-pooling layer in the middle. The baseline method does especially well on ModelNet10 in case (i) (Fig. 4 (b)), where it achieves the best performance among the methods. On ModelNet40 in case (i) (Fig. 4 (a)), RotationNet achieved a result comparable to MVCNN when we used all the views as input. In case (ii) (Figs. 4 (d) and (e)), where we consider full 3D rotation, RotationNet demonstrated performance superior to the other methods: with only three views, it matched the performance of MVCNN with a full set (20 views) of multi-view images.

Next, we investigate the performance of RotationNet with three different architectures: AlexNet [16], VGG-M [6], and ResNet-50 [12]. Table 1 shows the classification accuracy on ModelNet40 and ModelNet10. Because we deal with discrete viewpoints, we altered the camera system orientation across 11 settings (similarly to [8]) and calculated the mean and maximum accuracy over those trials. Surprisingly, the performance difference among architectures is marginal compared to the difference caused by different camera system orientations. This indicates that the placement of viewpoints is the most important factor in multi-view-based 3D object classification. See Sec. B for more details.

ModelNet40 ModelNet10
Archit. Mean Max Mean Max
AlexNet 93.70 ± 1.07 96.39 94.52 ± 1.01 97.58
VGG-M 94.68 ± 1.16 97.37 94.82 ± 1.17 98.46
ResNet-50 94.77 ± 1.10 96.92 94.80 ± 0.96 97.80
Table 1: Comparison of classification accuracy (%) of RotationNet with different base architectures (mean ± standard deviation and maximum over the 11 camera system orientations).

Finally, Table 5 summarizes the comparison of classification accuracy on ModelNet40 and ModelNet10 against existing 3D object classification methods. (We do not include the scores of “VRN Ensemble” [5], which uses an ensembling technique, because [5] states: “we suspect that this result is not general, and do not claim it with our main results.” The reported scores are 95.54% on ModelNet40 and 97.14% on ModelNet10, both of which are outperformed by RotationNet with any architecture; see Table 1.) RotationNet (with the VGG-M architecture) significantly outperformed existing methods on both ModelNet40 and ModelNet10. We report the maximum accuracy among the aforementioned 11 rotation trials. Note that the average accuracy of those trials on ModelNet40 was 94.68%, which is still superior to the current state-of-the-art score of 93.8% reported by Wang et al. [37]. Moreover, Wang et al. [37] used additional feature modalities, surface normals and normalized depth values, to further improve performance.

(a) ModelNet40, case (i) (b) ModelNet10, case (i) (c) MIRO, case (i) (d) ModelNet40, case (ii) (e) ModelNet10, case (ii) (f) MIRO, case (ii)
Figure 4: Classification accuracy vs. the number of views used for prediction. From left to right are shown the results on ModelNet40, ModelNet10, and our new dataset MIRO. The results for case (i) are shown in the top row and those for case (ii) in the bottom row. See Table 5 for an overall performance comparison to existing methods on ModelNet40 and ModelNet10.
Algorithm class view
MVCNN (softmax) 86.08 -
Baseline 88.73 -
Fine-grained, #iter = 300 81.23 26.94
Fine-grained, #iter = 4K 76.95 31.96
RotationNet 89.31 33.59
Table 2: Accuracy of classification and viewpoint estimation (%) in case (i) with the RGBD dataset.
Algorithm class view
MVCNN (softmax) 94.17 -
Baseline 95 -
Fine-grained, #iter = 800 92.76 56.72
Fine-grained, #iter = 4K 91.35 58.33
RotationNet 98.33 85.83
Table 3: Accuracy of classification and viewpoint estimation (%) in case (i) with MIRO.
Algorithm class view
MVCNN (softmax) 95 -
Baseline 95.83 -
Fine-grained, #iter = 1.1K 94.21 70.63
Fine-grained, #iter = 2.6K 93.54 72.38
RotationNet 99.17 75.67
Table 4: Accuracy of classification and viewpoint estimation (%) in case (ii) with MIRO.
Algorithm ModelNet40 ModelNet10
RotationNet 97.37 98.46
Dominant Set Clustering [37] 93.8 -
Kd-Networks [15] 91.8 94.0
MVCNN-MultiRes [25] 91.4 -
ORION [29] - 93.80
VRN [5] 91.33 93.61
FusionNet [13] 90.80 93.11
Pairwise [14] 90.70 92.80
PANORAMA-NN [30] 90.7 91.1
MVCNN [34] 90.10 -
Set-convolution [26] 90 -
FPNN [20] 88.4 -
Multiple Depth Maps [41] 87.8 91.5
LightNet [43] 86.90 93.39
PointNet [24] 86.2 -
Geometry Image [33] 83.9 88.4
3D-GAN [38] 83.30 91.00
ECC [32] 83.2 -
GIFT [1] 83.10 92.35
VoxNet [21] 83 92
Beam Search [40] 81.26 88
DeepPano [31] 77.63 85.45
3DShapeNets [39] 77 83.50
PointNet [11] - 77.6
Table 5: Comparison of classification accuracy (%). RotationNet achieved the state-of-the-art performance both with ModelNet40 and ModelNet10.

4.2 Experiment on a real image benchmark dataset

Next, we describe the experimental results on the benchmark RGBD dataset published in [18], which consists of real images of objects on a one-dimensional rotation table. This dataset contains 300 object instances in 51 categories. Although it provides depth images and 3D point clouds, we used only the RGB images in our experiment. We applied the upright orientation assumption (case (i)) in this experiment, because the bottom faces of objects on the turntable were not recorded. For each object instance, we picked out the images whose rotation angles are closest to the pre-defined azimuth angles of case (i). In the training phase, objects are self-aligned (in an unsupervised manner) and the viewpoint variables of the images are determined. To predict the pose of a test object instance, we predict the discrete viewpoint from which each test image is observed, and then take the most frequent pose value among those attached to the training samples predicted to be observed from the same viewpoint.
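The pose lookup described above can be sketched as follows (a hypothetical helper of our own; `train_views` and `train_poses` are assumed bookkeeping arrays): the test image's predicted discrete viewpoint indexes into the training images that were self-aligned to the same viewpoint, and the most frequent of their recorded rotation angles is returned as the pose estimate.

```python
from collections import Counter

def estimate_pose(predicted_view, train_views, train_poses):
    """Return the most frequent training pose among samples that were assigned
    the same discrete viewpoint during self-alignment.

    predicted_view: discrete viewpoint index predicted for the test image.
    train_views:    viewpoint indices assigned to training images during training.
    train_poses:    rotation angles (degrees) recorded for those training images.
    """
    poses = [p for v, p in zip(train_views, train_poses) if v == predicted_view]
    return Counter(poses).most_common(1)[0][0] if poses else None
```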

Table 2 summarizes the classification and viewpoint estimation accuracies. The baseline method and MVCNN cannot estimate viewpoints because they are essentially viewpoint invariant. As another baseline for comparison, we trained a CNN with the AlexNet architecture that outputs scores for every viewpoint-category pair so as to distinguish both viewpoints and categories, which we call “Fine-grained.” Here, “#iter” denotes the number of iterations for which the CNN parameters are updated in the training phase. As shown in Table 2, the classification accuracy of “Fine-grained” decreases while its viewpoint estimation accuracy improves as the number of iterations grows. We consider that this is because the “Fine-grained” classifiers become more and more sensitive to intra-class appearance variation during training, which hurts categorization accuracy. In contrast, RotationNet demonstrated the best performance in both object classification and viewpoint estimation, even though ground-truth poses are not given to RotationNet during training.

Table 6 shows the pose estimation error compared to existing methods, where “NN” is a simple nearest-neighbor regressor. “Med” indicates the median error and “Ave” the average error. “Med(C)” and “Ave(C)” are computed only on test images that were assigned the correct category by the system, whereas “Med(I)” and “Ave(I)” are computed only on those assigned the correct instance. Even though it can only predict discrete poses, RotationNet achieved the best performance.

Med Med(C) Med(I) Ave Ave(C) Ave(I)
NN 144.0 105.1 33.5 109.6 98.8 62.6
Indep Tree [3] 73.3 62.1 44.6 89.3 81.4 63.0
OPTree [19] 62.6 51.5 30.2 83.7 77.7 57.1
HMP [3] 20.0 18.7 18.0 53.6 47.5 44.8
RotationNet 18.9 16.7 15.8 51.7 39.4 37.3
Table 6: Comparison of pose estimation error (in degrees). RotationNet outperformed all the other methods.

4.3 Experiment on a 3D rotated real image dataset

Figure 5: Object instances self-aligned as a result of training RotationNet. The last instance in each category is a 3D CAD model.
Figure 6: Exemplar objects recognized using a USB camera. The second and fifth rows show 3D models in the estimated category and pose from a single view, whereas the third and sixth rows show those estimated using multiple views. The number in each image indicates the number of views used for predictions. Failure cases are shown in red boxes.

We describe the experimental results on our new dataset, “Multi-view Images of Rotated Objects (MIRO),” in this section. Exemplar images from the MIRO dataset are shown in Sec. E. We used Ortery’s 3D MFP studio (https://www.ortery.com/photography-equipment/3d-modeling/) to capture multi-view images of objects under 3D rotations. The RGBD benchmark dataset [18] has two issues for training multi-view based CNNs: an insufficient number of object instances per category (the minimum is two instances for training) and inconsistency with the upright orientation assumption. There are several cases where the upright orientation assumption is actually invalid: the attitudes of object instances relative to the rotation axis are inconsistent in some object categories. Also, that dataset does not include the bottom faces of objects on the turntable. Our MIRO dataset includes 10 object instances per object category, for a total of 120 object instances in 12 categories. We captured each object instance from viewpoints covering the viewing sphere, at multiple levels of elevation and at equally spaced azimuth angles. For our experiments in case (i), we used the images of each object instance captured at a single elevation level. We carefully captured all the object instances in each category with the same upright direction in order to evaluate performance in case (i). For case (ii), we used images observed from the vertices of a dodecahedron encompassing the object.

Figures 4 (c) and 4 (f) show object classification accuracy versus the number of views used for prediction in case (i) and case (ii), respectively. In both cases, RotationNet clearly outperforms both MVCNN and the baseline method once multiple views are used. We also tested the “Fine-grained” method, which outputs scores for every viewpoint-category pair in cases (i) and (ii) to distinguish both viewpoints and categories; the overall results are summarized in Tables 3 and 4. Similarly to the results on the RGBD dataset described above, there is a trade-off between object classification and viewpoint estimation accuracies in the “Fine-grained” approach. RotationNet achieved the best performance in both object classification and viewpoint estimation, which demonstrates the strength of the proposed approach.

Finally, we demonstrate the performance of RotationNet in real-world applications. For training, we used our MIRO dataset with viewpoint setup case (iii), where the outputs for images at all elevation levels are concatenated, which enables RotationNet to distinguish viewpoints across elevation levels. We added rendered images of a single 3D CAD model (whose upright orientation is manually assigned) to each object class, which were trained together with the MIRO dataset. We then obtained successful alignments between a CAD model and real images for all 12 object classes (Fig. 5). Figure 6 shows exemplar objects recognized using a USB camera. We estimated relative camera poses with LSD-SLAM [10] to integrate predictions from multiple views in sequence. The results obtained using multiple views (shown in the third and sixth rows) are consistently more accurate than those using a single view (shown in the second and fifth rows). It is worth noting that not only object classification but also pose estimation is improved by using multiple views.

5 Discussion

We proposed RotationNet, which jointly estimates the object category and viewpoint from each single-view image and aggregates the object class predictions obtained from a partial set of multi-view images. In our method, object instances are automatically aligned in an unsupervised manner, both within and across classes, based on their appearance during training. In experiments on the 3D object benchmark datasets ModelNet40 and ModelNet10, RotationNet significantly outperformed state-of-the-art methods based on voxels, point clouds, and multi-view images. RotationNet also achieves performance comparable to MVCNN [34] using only a couple of views, which is important for real-world applications. Another contribution is a publicly available new dataset named MIRO. Using this dataset and the RGBD object benchmark dataset [18], we showed that RotationNet even outperformed supervised learning based approaches on a pose estimation task. We consider that our pose estimation performance benefits from view-specific appearance information shared across classes, thanks to the inter-class self-alignment.

Similar to MVCNN [34] and any other 3D object classification method that considers a discrete set of rotations, RotationNet has the limitation that each image should be observed from one of the pre-defined viewpoints. The discrete pose estimation by RotationNet, however, demonstrated performance superior to existing methods on the RGBD object benchmark dataset. It could be further improved by a fine pose alignment post-process using, e.g., the iterative closest point (ICP) algorithm. Another potential avenue is the automatic selection of the best camera system orientation, since it has a strong effect on object classification accuracy.

References

  • [1] S. Bai, X. Bai, Z. Zhou, Z. Zhang, and L. J. Latecki. Gift: A real-time and scalable 3d shape search engine. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • [2] A. Bakry and A. Elgammal. Untangling object-view manifold for multiview recognition and pose estimation. In Proceedings of European Conference on Computer Vision (ECCV), 2014.
  • [3] L. Bo, X. Ren, and D. Fox. Unsupervised feature learning for rgb-d based object recognition. In Proceedings of International Symposium on Experimental Robotics (ISER), 2013.
  • [4] H. Borotschnig, L. Paletta, M. Prantl, and A. Pinz. Appearance-based active object recognition. Image and Vision Computing, 18(9), 2000.
  • [5] A. Brock, T. Lim, J. Ritchie, and N. Weston. Generative and discriminative voxel modeling with convolutional neural networks. In Proceedings of NIPS Workshop on 3D Deep Learning, 2017.
  • [6] K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep into convolutional nets. In Proceedings of British Machine Vision Conference (BMVC), 2014.
  • [7] C.-Y. Chen and K. Grauman. Inferring unseen views of people. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
  • [8] D.-Y. Chen, X.-P. Tian, Y.-T. Shen, and M. Ouhyoung. On visual similarity based 3D model retrieval. Computer Graphics Forum, 22(3), 2003.
  • [9] M. Elhoseiny, T. El-Gaaly, A. Bakry, and A. Elgammal. A comparative analysis and study of multiview cnn models for joint object categorization and pose estimation. In Proceedings of International Conference on Machine Learning (ICML), 2016.
  • [10] J. Engel, T. Schöps, and D. Cremers. Lsd-slam: Large-scale direct monocular slam. In Proceedings of European Conference on Computer Vision (ECCV), 2014.
  • [11] A. Garcia-Garcia, F. Gomez-Donoso, J. Garcia-Rodriguez, S. Orts-Escolano, M. Cazorla, and J. Azorin-Lopez. Pointnet: A 3d convolutional neural network for real-time object class recognition. In Proceedings of IEEE International Joint Conference on Neural Networks (IJCNN), 2016.
  • [12] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • [13] V. Hegde and R. Zadeh. Fusionnet: 3d object classification using multiple data representations. arXiv preprint arXiv:1607.05695, 2016.
  • [14] E. Johns, S. Leutenegger, and A. J. Davison. Pairwise decomposition of image sequences for active multi-view recognition. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • [15] R. Klokov and V. Lempitsky. Escape from cells: Deep kd-networks for the recognition of 3d point cloud models. In Proceedings of International Conference on Computer Vision (ICCV), 2017.
  • [16] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Proceedings of Advances in Neural Information Processing Systems (NIPS), 2012.
  • [17] A. Kuznetsova, S. J. Hwang, B. Rosenhahn, and L. Sigal. Exploiting view-specific appearance similarities across classes for zero-shot pose prediction: A metric learning approach. In Proceedings of AAAI Conference on Artificial Intelligence, 2016.
  • [18] K. Lai, L. Bo, X. Ren, and D. Fox. A large-scale hierarchical multi-view rgb-d object dataset. In Proceedings of IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2011.
  • [19] K. Lai, L. Bo, X. Ren, and D. Fox. A scalable tree-based approach for joint object and pose recognition. In Proceedings of AAAI Conference on Artificial Intelligence, 2011.
  • [20] Y. Li, S. Pirk, H. Su, C. R. Qi, and L. J. Guibas. FPNN: Field probing neural networks for 3d data. In Proceedings of Advances in Neural Information Processing Systems (NIPS), 2016.
  • [21] D. Maturana and S. Scherer. Voxnet: A 3d convolutional neural network for real-time object recognition. In Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015.
  • [22] D. Novotny, D. Larlus, and A. Vedaldi. Learning 3d object categories by looking around them. In Proceedings of International Conference on Computer Vision (ICCV), 2017.
  • [23] L. Paletta and A. Pinz. Active object recognition by view integration and reinforcement learning. Robotics and Autonomous Systems, 31(1), 2000.
  • [24] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. Pointnet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [25] C. R. Qi, H. Su, M. Niessner, A. Dai, M. Yan, and L. J. Guibas. Volumetric and multi-view CNNs for object classification on 3D data. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • [26] S. Ravanbakhsh, J. Schneider, and B. Poczos. Deep learning with sets and point clouds. arXiv preprint arXiv:1611.04500, 2016.
  • [27] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, 115(3), 2015.
  • [28] S. Savarese and L. Fei-Fei. 3D generic object categorization, localization and pose estimation. In Proceedings of International Conference on Computer Vision (ICCV), 2007.
  • [29] N. Sedaghat, M. Zolfaghari, and T. Brox. Orientation-boosted voxel nets for 3D object recognition. In Proceedings of British Machine Vision Conference (BMVC), 2017.
  • [30] K. Sfikas, T. Theoharis, and I. Pratikakis. Exploiting the panorama representation for convolutional neural network classification and retrieval. In Proceedings of Eurographics Workshop on 3D Object Retrieval (3DOR), 2017.
  • [31] B. Shi, S. Bai, Z. Zhou, and X. Bai. Deeppano: Deep panoramic representation for 3-d shape recognition. IEEE Signal Processing Letters, 22(12), 2015.
  • [32] M. Simonovsky and N. Komodakis. Dynamic edge-conditioned filters in convolutional neural networks on graphs. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [33] A. Sinha, J. Bai, and K. Ramani. Deep learning 3D shape surfaces using geometry images. In Proceedings of European Conference on Computer Vision (ECCV), 2016.
  • [34] H. Su, S. Maji, E. Kalogerakis, and E. G. Learned-Miller. Multi-view convolutional neural networks for 3D shape recognition. In Proceedings of International Conference on Computer Vision (ICCV), 2015.
  • [35] H. Su, C. R. Qi, Y. Li, and L. J. Guibas. Render for cnn: Viewpoint estimation in images using cnns trained with rendered 3D model views. In Proceedings of International Conference on Computer Vision (ICCV), 2015.
  • [36] H. Su, F. Wang, E. Yi, and L. J. Guibas. 3D-assisted feature synthesis for novel views of an object. In Proceedings of International Conference on Computer Vision (ICCV), 2015.
  • [37] C. Wang, M. Pelillo, and K. Siddiqi. Dominant set clustering and pooling for multi-view 3d object recognition. In Proceedings of British Machine Vision Conference (BMVC), 2017.
  • [38] J. Wu, C. Zhang, T. Xue, W. T. Freeman, and J. B. Tenenbaum. Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. In Proceedings of Advances in Neural Information Processing Systems (NIPS), 2016.
  • [39] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
  • [40] X. Xu and S. Todorovic. Beam search for learning a deep convolutional neural network of 3d shapes. In Proceedings of International Conference on Pattern Recognition (ICPR), 2016.
  • [41] P. Zanuttigh and L. Minto. Deep learning for 3d shape classification from multiple depth maps. In Proceedings of IEEE International Conference on Image Processing (ICIP), 2017.
  • [42] H. Zhang, T. El-Gaaly, A. M. Elgammal, and Z. Jiang. Joint object and pose recognition using homeomorphic manifold analysis. In Proceedings of AAAI Conference on Artificial Intelligence, volume 2, 2013.
  • [43] S. Zhi, Y. Liu, X. Li, and Y. Guo. Lightnet: A lightweight 3D convolutional neural network for real-time 3D object recognition. In Proceedings of Eurographics Workshop on 3D Object Retrieval (3DOR), 2017.
  • [44] T. Zhou, M. Brown, N. Snavely, and D. G. Lowe. Unsupervised learning of depth and ego-motion from video. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [45] Z. Zhu, P. Luo, X. Wang, and X. Tang. Multi-view perceptron: a deep model for learning face identity and view representations. In Proceedings of Advances in Neural Information Processing Systems (NIPS), 2014.

Appendix A Qualitative evaluation of self-alignment during the training of RotationNet

Figures 7 and 8 show the state transition of the inter- and intra-class object pose alignment that is automatically achieved during the training of RotationNet with ModelNet40; they depict the variation of the average images generated by concatenating multi-view images in order of their predicted viewpoint variables. The figures correspond to case (i) (with upright orientation) and case (ii) (w/o upright orientation), respectively. We can see that the variance of the average images decreases together with the variance of object poses. The red dotted lines show the mean variance of the average images of all the classes, whereas the blue lines show the variance of the average images of the “chair” class.

The images of test object instances in the “chair” class with the same predicted viewpoint variable are shown on the right of the figures. In both cases (i) and (ii), of which the latter is more interesting because of its difficulty, the chairs with initially random poses gradually become aligned in the same direction after several hundred training iterations. Moreover, the average images of all 40 classes with the same predicted viewpoint variable, shown in red boxes, indicate that not only intra-class but also inter-class alignment is achieved. The alignment is less obvious in the red boxes of Fig. 8; however, we confirmed that this does not harm the object classification accuracy.

Figure 7: Variance change of the average images generated by concatenating multi-view images (case (i)) in order of their predicted viewpoint variables. Average images of 40 classes and images of the “chair” class with the same predicted viewpoint variable in iterations 0, 100, 200, and 400 are shown in red boxes and blue boxes, respectively.
Figure 8: Variance change of the average images generated by concatenating multi-view images (case (ii)) in order of their predicted viewpoint variables. Average images of 40 classes and images of the “chair” class with the same predicted viewpoint variable in iterations 0, 100, 200, and 500 are shown in red boxes and blue boxes, respectively.

Appendix B Influence of camera system orientation

As described in Section 4.1 of the main manuscript, we tested the performance of RotationNet with different camera system orientations. Figure 9 shows exemplar multi-view images of a chair in the ModelNet40 dataset captured in case (ii) with the 11 camera system orientations. Although an “aligned” version of the ModelNet40 dataset has recently been released, we used the original “unaligned” ModelNet40 dataset in our work. Each camera system orientation is generated by rotating the camera system about two different axes, so that different orientations capture different object profiles. Table 7 compares the classification accuracy (%) on ModelNet40 and ModelNet10 for the different camera system orientations. We varied the base architecture of RotationNet among AlexNet, VGG-M, and ResNet-50. The best score for each architecture among the orientations is shown in bold. As shown here, the best camera system orientation is consistent across architectures: the second one for ModelNet10 and the fourth one for ModelNet40. This indicates that multi-view object classification is greatly improved by observing appropriate aspects of objects. In addition, Table 7 shows that the best performance on the validation set (which we extracted from the training split of ModelNet40) was achieved with the same camera system orientation as on the test set. Therefore, it is possible to obtain the best RotationNet model by selecting, among the different camera system orientations, the one that best classifies a validation set.

Figure 9: Exemplar multi-view images of a chair captured in case (ii) with different camera system orientations. The numbers on the left indicate the camera system orientation IDs.
Camera system orientation ID
Dataset Archit. 1 2 3 4 5 6 7 8 9 10 11 Mean
ModelNet40 - val AlexNet 93.03 94.33 95.14 95.54 92.63 92.46 92.38 92.95 92.79 92.79 92.87 93.35 ± 1.06
ModelNet40 - test AlexNet 93.03 94.04 95.22 96.39 93.35 92.99 92.79 93.15 92.99 93.31 93.48 93.70 ± 1.07
ModelNet40 - test VGG-M 93.64 95.91 96.07 97.37 94.12 93.80 94.08 94.25 93.68 94.41 94.17 94.68 ± 1.16
ModelNet40 - test ResNet-50 93.76 96.64 95.91 96.92 94.08 93.68 94.45 94.29 94.45 94.29 94.00 94.77 ± 1.10
ModelNet10 - test AlexNet 94.16 97.58 93.94 94.38 94.60 93.61 94.05 93.94 94.38 94.71 94.38 94.52 ± 1.01
ModelNet10 - test VGG-M 94.38 98.46 94.05 94.93 94.38 94.38 94.38 94.49 94.82 94.49 94.27 94.82 ± 1.17
ModelNet10 - test ResNet-50 94.82 97.80 94.49 94.49 94.16 94.60 94.38 94.60 94.27 94.71 94.49 94.80 ± 0.96
Table 7: Comparison of classification accuracy (%) with different camera system orientations. The "Mean" column reports the mean ± standard deviation over the 11 orientations. The best scores for each RotationNet architecture among the orientations are shown in bold.

Appendix C Effectiveness of fine-tuning

Even when the input images are grayscale rendered images of 3D models, fine-tuning the ImageNet pre-trained weights is effective. Figures 10 and 11 respectively show the training loss and the classification accuracy (%) on ModelNet40 using RotationNet trained with and without fine-tuning. Here, we used AlexNet as the base architecture of RotationNet. As shown in these figures, the fine-tuned RotationNet converges earlier and to a better optimum than the model trained without fine-tuning. This indicates that the ImageNet pre-trained weights capture general features of objects, which leads RotationNet to achieve reliable performance in the object classification task.

Figure 10: Comparison of training loss on ModelNet40 using RotationNet trained w/ and w/o fine-tuning.
Figure 11: Comparison of classification accuracy (%) on ModelNet40 using RotationNet trained w/ and w/o fine-tuning.
Figure 12: Classification accuracy (%) on ShapeNetCore55. We randomly rotated each test model multiple times and used the maximum scores.

Appendix D Sensitivity to pre-defined views assumption

Similar to existing 3D object classification methods (such as MVCNN) that consider a discrete set of rotations, RotationNet has the limitation that each image should be observed from one of the pre-defined viewpoints. To examine the sensitivity to this pre-defined views assumption, we conducted an additional experiment with the ShapeNetCore55 dataset (http://shapenet.cs.stanford.edu/shrec16/), which consists of 3D models in 55 object categories. We trained our model with the aligned training dataset and tested the classification of unaligned (i.e., randomly rotated) models. The accuracy was considerably lower than for aligned test models, which means our model is rather sensitive to the pre-defined views. However, as shown in Fig. 12, the accuracy increases if we randomly rotate each test model several times and use the maximum object scores. Moreover, when trained with an unaligned dataset (as is the case with the ModelNet dataset), we obtain a model that is much less sensitive to the viewpoint sampling, achieving much higher accuracy for unaligned test models.
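The test-time strategy above can be summarized by the following sketch (our own; `render_and_score` is a hypothetical callable that renders the test model under one random rotation and returns RotationNet's per-class scores): the per-class scores are maximized over several random rotations before the final argmax.

```python
import numpy as np

def classify_with_random_rotations(render_and_score, num_trials, rng):
    """Randomly rotate the test model several times, score each rotated copy,
    and take the per-class maximum over the trials before predicting."""
    scores = np.stack([render_and_score(rng) for _ in range(num_trials)])
    return int(np.argmax(scores.max(axis=0)))
```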

Appendix E Object instances in MIRO dataset

Figure 13 shows thumbnail images of all the object instances in our new dataset, MIRO. The MIRO dataset includes 10 object instances per object category, for a total of 120 object instances in 12 categories. Each object instance has images captured from different viewpoints approximately equally distributed in spherical coordinates. An example of the multi-view images is shown in Fig. 14.

Figure 13: Thumbnail images of all the object instances in MIRO. From left to right are shown the object instances in “bus,” “car,” “cleanser,” “clock,” “cup,” “headphones,” “mouse,” “scissors,” “shoe,” “stapler,” “sunglasses,” and “tape_cutter” category.
Figure 14: All the images of an exemplar object instance captured from different viewpoints.

Appendix F Candidates for viewpoint variables in case (ii): w/o upright orientation

In the case where we do not assume upright orientation, we place virtual cameras on the 20 vertices of a dodecahedron encompassing the object. There are three different patterns of rotation from a given view, because three edges are connected to each vertex of a dodecahedron. Therefore, the number of candidates for all the viewpoint variables is 60. Figures 15-23 show all the candidates for a set of viewpoint variables in this case, in which vertex and image IDs are shown in the top and bottom rows, respectively. The vertex ID on the top row indicates the vertex from which the corresponding image of the object instance is observed; for instance, Fig. 15 (b) shows the assignment for Candidate #2. The red star indicates the camera position where the first image is captured, and the red dot indicates the camera position where the ninth image is captured.

(a) Candidate #1 (b) Candidate #2 (c) Candidate #3 (d) Candidate #4

Figure 15: Candidates #1-#4 for a set of viewpoint variables w/o upright orientation.

(a) Candidate #5 (b) Candidate #6 (c) Candidate #7 (d) Candidate #8 (e) Candidate #9 (f) Candidate #10 (g) Candidate #11

Figure 16: Candidates #5-#11 for a set of viewpoint variables w/o upright orientation.

(a) Candidate #12 (b) Candidate #13 (c) Candidate #14 (d) Candidate #15 (e) Candidate #16 (f) Candidate #17 (g) Candidate #18

Figure 17: Candidates #12-#18 for a set of viewpoint variables w/o upright orientation.

(a) Candidate #19 (b) Candidate #20 (c) Candidate #21 (d) Candidate #22 (e) Candidate #23 (f) Candidate #24 (g) Candidate #25

Figure 18: Candidates #19-#25 for a set of viewpoint variables w/o upright orientation.

(a) Candidate #26 (b) Candidate #27 (c) Candidate #28 (d) Candidate #29 (e) Candidate #30 (f) Candidate #31 (g) Candidate #32

Figure 19: Candidates #26-#32 for a set of viewpoint variables w/o upright orientation.

(a) Candidate #33 (b) Candidate #34 (c) Candidate #35 (d) Candidate #36 (e) Candidate #37 (f) Candidate #38 (g) Candidate #39

Figure 20: Candidates #33-#39 for a set of viewpoint variables w/o upright orientation.

(a) Candidate #40 (b) Candidate #41 (c) Candidate #42 (d) Candidate #43 (e) Candidate #44 (f) Candidate #45 (g) Candidate #46

Figure 21: Candidates #40-#46 for a set of viewpoint variables w/o upright orientation.

(a) Candidate #47 (b) Candidate #48 (c) Candidate #49 (d) Candidate #50 (e) Candidate #51 (f) Candidate #52 (g) Candidate #53

Figure 22: Candidates #47-#53 for a set of viewpoint variables w/o upright orientation.

(a) Candidate #54 (b) Candidate #55 (c) Candidate #56 (d) Candidate #57 (e) Candidate #58 (f) Candidate #59 (g) Candidate #60

Figure 23: Candidates #54-#60 for a set of viewpoint variables w/o upright orientation.