Deep Learning for Spacecraft Pose Estimation from Photorealistic Rendering

Pedro F. Proença and Yang Gao The authors are with the Surrey Space Centre, Faculty of Engineering and Physical Sciences, University of Surrey, GU2 7XH Guildford, U.K. {p.proenca, yang.gao}@surrey.ac.uk
Abstract

On-orbit proximity operations in space rendezvous, docking and debris removal require precise and robust 6D pose estimation under a wide range of lighting conditions and against a highly textured background, i.e., the Earth.

This paper investigates leveraging deep learning and photorealistic rendering for monocular pose estimation of known uncooperative spacecraft. We first present a simulator built on Unreal Engine 4, named URSO, to generate labeled images of spacecraft orbiting the Earth, which can be used to train and evaluate neural networks.

Secondly, we propose a deep learning framework for pose estimation based on orientation soft classification, which allows modelling orientation ambiguity as a mixture of Gaussians. This framework was evaluated both on URSO datasets and the ESA pose estimation challenge. In this competition, our best model achieved 3rd place on the synthetic test set and 2nd place on the real test set. Moreover, our results show the impact of several architectural and training aspects, and we demonstrate qualitatively how models learned on URSO datasets can perform on real images from space.

I Introduction

Spacecraft position and attitude estimation is essential to on-orbit operations [1], e.g., formation flying, rendezvous, docking, servicing and space debris removal [2]. These rely on precise and robust estimation of the relative pose and trajectory of object targets in close proximity under harsh lighting conditions and against a highly textured background (i.e. Earth). As surveyed in [3], according to the specific operation scenario, the targets may be either: (i) cooperative, if they use a dedicated radio-link, fiducial markers or retro-reflectors to aid pose determination, or (ii) non-cooperative, with either unknown or known geometry. Recently, the latter has been gaining interest from both the research community and space agencies, mainly due to the accumulation of inactive satellites and space debris in low Earth orbit [4], but also due to military space operations. For instance, ESA opened a competition [5], this year, to estimate the pose of a known spacecraft from a single image using supervised learning. This paper addresses this problem.

The main limitation of deep learning (DL) is that it needs a lot of data, which is especially costly in space. Therefore, as our first contribution, we propose a visual simulator built on Unreal Engine 4, named URSO, which allows obtaining photorealistic images and depth masks of commonly used spacecraft orbiting the Earth, as seen in Fig. 1. Secondly, we carried out an extensive experimental study of a DL-based pose estimation framework on datasets obtained from URSO, where we investigate the performance impact of several aspects of the architecture and training configuration. Among our findings, we conclude that data augmentation with random camera orientation perturbations is quite effective at combating overfitting, and we present a probabilistic orientation estimation via soft classification that performs significantly better than direct orientation regression and can further model uncertainty due to orientation ambiguity as a Gaussian mixture. Moreover, our best solution achieved 3rd place on the synthetic dataset and 2nd place on the real dataset of the ESA pose estimation challenge [5]. We also demonstrate qualitatively how models trained on URSO data can generalize to real images from space through our augmentation pipeline.

Fig. 1: Example of frames synthesized by URSO of a Soyuz model. For videos and the datasets used in this work, refer to: https://pedropro.github.io/project/urso/

II Related Work

Previous monocular solutions [6, 7, 8, 9, 10] to spacecraft tracking and pose estimation rely on model-based approaches [11] that align a wireframe model of the object to an edge image (typically given by a Canny detector) of the real object based on heuristics. However, objects are more than just a collection of edges and geometric primitives. Convolutional Neural Networks (CNNs) can learn more complex features that are meaningful to the task at hand, while ignoring background features (e.g. clouds) based on context.

Despite the maturity of DL in many computer vision tasks, only recently [12, 13, 14, 15, 16, 17, 18] has DL become common in pose estimation problems. Kendall et al. [12] first proposed adapting and training GoogLeNet on Structure-from-Motion models for camera relocalization. Their network was trained to regress a quaternion by using the Euclidean distance between quaternions as a loss function. Moreover, they extended their method to model uncertainty by using Monte Carlo sampling from a network with dropout [19]. Kehl et al. [14] proposed a DL solution for detection and pose estimation of multiple objects based on hard viewpoint classification, where ambiguous views are manually removed a priori. On the other hand, Xiang et al. [13] proposed a model based on a segmentation network for handling multiple objects. While object locations are estimated using Hough voting on the network image output, their orientations are estimated through quaternion regression following ROI pooling. To account for object symmetries, a loss function based on ICP is used, but this is prone to local minima and requires a depth map. Moreover, the segmentation assumes only one object instance per class. In contrast, Do et al. [15] extended an instance segmentation network (i.e. Mask R-CNN) to pose estimation by simply adding a head branch, which regresses orientation as the angle-axis vector (i.e. the Lie algebra of SO(3)). Although this is a minimal parameterization that avoids the quaternion normalization layer, they still employed a Euclidean loss function. Mahendran et al. [20] also regress the angle-axis vector, but they minimize the geodesic loss directly. Su et al. [21] performed fine-grained hard viewpoint classification. Hara et al. [22] compared regressing the azimuth, using either a Euclidean loss or the angular difference loss, against hard classification followed by a mean-shift algorithm to retrieve a continuous value. DL has also been successfully applied to visual odometry [23, 24]. While Wang et al. [23] simply regress orientation as Euler angles, Zhou et al. [24] simultaneously regress multiple (i.e. 64) pose hypotheses with the angle-axis representation and then average them, since pose updates in visual odometry are usually small. There is a large body of work on pose estimation from RGB-D images, which was recently comprehensively evaluated in [25], where ICP is typically used for pose refinement. In their benchmark, Hodan et al. [25] concluded that learning-based solutions are still not on par with point-cloud-based methods [26] in terms of precision. More recently, however, several works [16, 17, 18] have advanced the state of the art by refraining from estimating pose directly and instead using segmentation networks to regress the 2D projections of predefined 3D keypoints (e.g. object bounding box corners), finally estimating pose using robust PnP solutions, e.g., embedded in RANSAC. Approaches such as [16], however, need further work to handle very small or far-away objects, as they rely on coarse segmentation grids.

Sharma et al. [27] were the first to propose using CNNs for spacecraft pose estimation, based on hard viewpoint classification, but later they [28] proposed doing position estimation based on bounding box detection and orientation estimation based on soft classification. Although the position estimation fails when part of the object is outside the field of view, the orientation estimation has its merits. Two head branches are used for orientation estimation: one does hard classification, given a set of pre-defined quaternions, to find the closest quaternions to the actual one; then a second branch estimates the weights for these quaternions, and the final orientation is given by the weighted average quaternion. Our method for orientation estimation is similar to this approach; however, we propose a more principled solution. Notably, our framework does not require two orientation branches, provides intuitive regularization parameters and can handle multiple hypotheses due to perceptual aliasing.

Fig. 2: Simplified overview of the network architecture proposed in this work.

III Pose Estimation Framework

Our network architecture, depicted in Fig. 2, is aimed at simplicity rather than efficiency, to perform a first ablation study. We adopted ResNet architectures with pre-trained weights as the network backbone, due to their low number of pooling layers and good accuracy-complexity trade-off [29]. The last fully-connected layer and the global average pooling layer of the original network were removed to keep spatial feature resolution, leaving effectively only one pooling layer at the second layer. The global pooling layer was replaced by one extra 3×3 convolution with a stride of 2 (bottleneck layer) to compress the CNN features, since our task branches are fully connected to the input tensor. For lower space complexity, one could instead use a Region Proposal Network as in [13, 15, 30], but this complicates our end-to-end pose estimation. As a drawback, our network does not handle multiple objects per se.
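As a rough PyTorch sketch of this layout (the head widths, the softmax on the orientation output and the hard-coded feature-map size for a 640×480 input are illustrative assumptions, not the exact configuration used in the experiments):

```python
import torch
import torch.nn as nn
import torchvision

class PoseNet(nn.Module):
    """Sketch: ResNet backbone without global pooling, a strided 3x3
    bottleneck convolution and two fully-connected head branches."""
    def __init__(self, n_bins=16, bottleneck=32, fc_width=512):
        super().__init__()
        resnet = torchvision.models.resnet50()  # the paper starts from Mask R-CNN COCO weights
        # drop the global average pooling and the final fully-connected layer
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])
        # 3x3 convolution with stride 2 compresses the 2048-channel feature map
        self.bottleneck = nn.Conv2d(2048, bottleneck, kernel_size=3, stride=2, padding=1)
        flat = bottleneck * 8 * 10  # feature-map size for a 640x480 input (illustrative only)
        self.loc_head = nn.Sequential(
            nn.Flatten(), nn.Linear(flat, fc_width), nn.ReLU(), nn.Linear(fc_width, 3))
        self.ori_head = nn.Sequential(
            nn.Flatten(), nn.Linear(flat, fc_width), nn.ReLU(), nn.Linear(fc_width, n_bins ** 3))

    def forward(self, x):
        f = torch.relu(self.bottleneck(self.backbone(x)))
        return self.loc_head(f), torch.softmax(self.ori_head(f), dim=-1)
```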

Our 3D location estimation is a simple regression branch with two fully-connected layers, but instead of minimizing the absolute Euclidean distance, we minimize the relative error, corresponding to the first term of our total loss function:

$$L_{loc} = \frac{\left\| \hat{\mathbf{t}} - \mathbf{t} \right\|_2}{\left\| \mathbf{t} \right\|_2} \qquad (1)$$

where $\hat{\mathbf{t}}$ and $\mathbf{t}$ are respectively the estimated and ground-truth translation vectors. The sole advantage of minimizing the relative error is that the fine-tuned loss weights in our experiments generalize better to other datasets, as this loss does not depend on the translation scale. To avoid having to fine-tune loss weights, we have also experimented in Section VI with regressing three virtual 3D keypoints instead and then estimating the pose using a closed-form solution [31].
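A minimal PyTorch sketch of this relative-error term (the function name is ours):

```python
import torch

def relative_location_loss(t_est, t_gt, eps=1e-8):
    """Mean relative translation error ||t_est - t_gt|| / ||t_gt|| over a batch."""
    err = torch.norm(t_est - t_gt, dim=-1)
    return torch.mean(err / (torch.norm(t_gt, dim=-1) + eps))
```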

III-A Direct Orientation Regression

While several works [12, 15, 23] have used an $L_1$ or $L_2$ loss to regress orientation, this does not correctly represent the actual angular distance for any orientation representation. Quaternions, for example, are non-injective: $\mathbf{q}$ and $-\mathbf{q}$ encode the same rotation. While one can map quaternions to lie only on one hemisphere as in [32], distances to quaternions near the equator will still not express the geodesic distance. One can instead directly minimize the geodesic distance $2\arccos(|\mathbf{q} \cdot \hat{\mathbf{q}}|)$ or an even simpler expression: $1 - |\mathbf{q} \cdot \hat{\mathbf{q}}|$. In our framework, we have experimented with both loss functions to regress a unit quaternion $\hat{\mathbf{q}}$, subject to a normalization layer. One possible issue with the first expression is that the derivative of $\arccos(x)$ is infinite at $x = 1$, but this can be easily solved by scaling down its argument.
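The two orientation losses above can be sketched as follows (assuming batched unit quaternions; the clamp plays the role of the scaling trick that keeps the arccos derivative finite):

```python
import torch

def geodesic_loss(q_est, q_gt, eps=1e-7):
    """Angular distance 2*arccos(|<q_est, q_gt>|), invariant to the q/-q sign."""
    dot = torch.abs(torch.sum(q_est * q_gt, dim=-1))
    return torch.mean(2.0 * torch.acos(torch.clamp(dot, max=1.0 - eps)))

def dot_product_loss(q_est, q_gt):
    """Simpler surrogate 1 - |<q_est, q_gt>| with bounded gradients."""
    return torch.mean(1.0 - torch.abs(torch.sum(q_est * q_gt, dim=-1)))
```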

III-B Probabilistic Orientation Soft Classification

Alternatively, we propose to do continuous orientation estimation via classification with soft assignment coding [33]. The key idea is to encode each orientation label (a ground-truth quaternion $\mathbf{q}_{gt}$) as a Gaussian random variable in a discrete orientation output space (represented in Fig. 2), so that the network learns to output probability mass functions. To this end, a 3D histogram is used as the network output, where each bin maps to a combination of discrete Euler angles specified by the quantization step. Special care is taken to avoid redundant bins at the Gimbal lock and at the borders. Let $\{\mathbf{q}_1, ..., \mathbf{q}_N\}$ be the quaternions corresponding to the histogram bins; then, during training, each bin $i$ is encoded with the soft assignment function:

$$w_i(\mathbf{q}_{gt}) = \frac{K(\mathbf{q}_i, \mathbf{q}_{gt})}{\sum_{j=1}^{N} K(\mathbf{q}_j, \mathbf{q}_{gt})} \qquad (2)$$

where the kernel function $K$ uses the angular difference $\theta(\mathbf{q}_i, \mathbf{q}_j) = 2\arccos(|\mathbf{q}_i \cdot \mathbf{q}_j|)$ between two normalized quaternions:

$$K(\mathbf{q}_i, \mathbf{q}_j) = \exp\left(-\frac{\theta(\mathbf{q}_i, \mathbf{q}_j)^2}{2\sigma^2}\right) \qquad (3)$$

and the variance $\sigma^2$ is given by the quantization error approximation $\sigma^2 = \alpha\,\Delta^2/12$, where $\Delta = 360^{\circ}/b$ represents the quantization step, $\alpha$ is the smoothing factor that controls the Gaussian width and $b$ is the number of bins per dimension (i.e. Euler angle).
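A NumPy sketch of this encoding (helper names and the exact form of the variance are our reading of (2)-(3); the default smoothing factor follows the best setting in Table I):

```python
import numpy as np

def angular_diff(q1, q2):
    """Angular difference (rad) between unit quaternions, ignoring the q/-q sign."""
    return 2.0 * np.arccos(np.clip(abs(np.dot(q1, q2)), 0.0, 1.0))

def encode_soft_labels(q_gt, bin_quats, n_bins_per_dim, alpha=6.0):
    """Soft assignment of a ground-truth quaternion to the orientation bins (Eqs. 2-3)."""
    step = 2.0 * np.pi / n_bins_per_dim          # quantization step per Euler angle
    var = alpha * step ** 2 / 12.0               # smoothed quantization-error variance
    d = np.array([angular_diff(q_gt, q) for q in bin_quats])
    k = np.exp(-d ** 2 / (2.0 * var))            # Gaussian kernel on the angular difference
    return k / np.sum(k)                         # normalized soft assignments (Eq. 2)
```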

At test time, given the bin activations $c_i$ and the respective quaternions $\mathbf{q}_i$, mapped to one hemisphere, we can fit a quaternion $\mathbf{q}^{\ast}$ by minimizing the weighted least squares:

$$\mathbf{q}^{\ast} = \underset{\|\mathbf{q}\|=1}{\arg\min} \; \sum_{i=1}^{N} w_i \left\| \left(\mathbf{I}_4 - \mathbf{q}_i \mathbf{q}_i^{\top}\right) \mathbf{q} \right\|^2 \qquad (4)$$

where $w_i$ is assigned to the activation $c_i$ and the optimal solution is given by the right null space of the matrix $\sum_i w_i \left(\mathbf{I}_4 - \mathbf{q}_i \mathbf{q}_i^{\top}\right)$ [34]. This solution was also employed in [28].
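The closed-form weighted average of [34] can be sketched with an eigendecomposition (equivalent to the null-space solution above, since subtracting a multiple of the identity does not change the eigenvectors):

```python
import numpy as np

def average_quaternion(bin_quats, weights):
    """Weighted quaternion average [34]: the optimum is the eigenvector of
    M = sum_i w_i q_i q_i^T associated with the largest eigenvalue."""
    Q = np.asarray(bin_quats, dtype=float)        # N x 4, all mapped to one hemisphere
    w = np.asarray(weights, dtype=float)
    M = (w[:, None] * Q).T @ Q                    # 4x4 accumulator matrix
    eigvals, eigvecs = np.linalg.eigh(M)          # symmetric eigendecomposition
    q = eigvecs[:, np.argmax(eigvals)]
    return q / np.linalg.norm(q)
```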

III-C Multimodal Orientation Estimation

When there are ambiguous views in the training set, this results in one-to-many mappings; therefore the optimal network minimizing the cross-entropy loss, given the soft assignments in (2), will output a multimodal distribution. To extract multiple orientation hypotheses from such a network output, we propose an Expectation-Maximization (EM) framework to fit a Gaussian mixture model with means $\{\bar{\mathbf{q}}_1, ..., \bar{\mathbf{q}}_J\}$. In the expectation step, for every model $j$ and bin $i$ we compute the membership:

$$\gamma_{ij} = \frac{\pi_j\, \mathcal{N}_{ij}}{\sum_{k=1}^{J} \pi_k\, \mathcal{N}_{ik}} \qquad (5)$$

where $\mathcal{N}_{ij} = \mathcal{N}\!\left(\theta(\mathbf{q}_i, \bar{\mathbf{q}}_j);\, 0, \sigma_j^2\right)$, with $\sigma_j$ initialized as in (3) and the priors $\pi_j$ initialized as equiprobable. These are then updated in the maximization step:

$$\pi_j = \frac{\sum_i c_i\, \gamma_{ij}}{\sum_i c_i}, \qquad \sigma_j^2 = \frac{\sum_i c_i\, \gamma_{ij}\, \theta(\mathbf{q}_i, \bar{\mathbf{q}}_j)^2}{\sum_i c_i\, \gamma_{ij}} \qquad (6)$$

where $\bar{\mathbf{q}}_j$ is first obtained by solving (4) with the weights $w_i = c_i\, \gamma_{ij}$. The model means are initialized as the bins with the strongest activations after non-maximum suppression. To find the optimal number of models $J$, we increase $J$ until the log-likelihood stops increasing by more than a threshold.
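An illustrative sketch of this EM loop, reusing the helpers above (the update formulas follow the standard weighted mixture-model pattern used to write (5)-(6) and are not the exact implementation):

```python
import numpy as np

def em_orientation_modes(bin_quats, activations, init_means, sigma0, n_iter=10):
    """Fit a mixture of J orientation modes to the bin activations (Sec. III-C sketch)."""
    means = [np.asarray(m, dtype=float) for m in init_means]   # from non-maximum suppression
    J = len(means)
    priors = np.full(J, 1.0 / J)                               # equiprobable priors
    sigmas = np.full(J, sigma0)                                # initialized as in (3)
    c = np.asarray(activations, dtype=float)
    for _ in range(n_iter):
        # E-step: membership of every bin to every mode (Eq. 5)
        d = np.array([[angular_diff(q, m) for m in means] for q in bin_quats])
        lik = priors * np.exp(-d ** 2 / (2.0 * sigmas ** 2)) / sigmas
        gamma = lik / (lik.sum(axis=1, keepdims=True) + 1e-12)
        # M-step: refit each mode using activation-weighted memberships (Eq. 6)
        for j in range(J):
            w = c * gamma[:, j]
            means[j] = average_quaternion(bin_quats, w)        # solves (4) with weights w
            dj = np.array([angular_diff(q, means[j]) for q in bin_quats])
            sigmas[j] = np.sqrt(np.sum(w * dj ** 2) / np.sum(w))
            priors[j] = np.sum(w) / np.sum(c)
    return means, priors, sigmas
```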

IV URSO: Unreal Rendered Spacecraft On Orbit

Our simulator leverages Unreal Engine 4 (UE4) features to render realistic images, e.g., physically based materials, bloom and lens flare. Lighting in our environment is simply made of a directional light and a spotlight to simulate respectively sunlight and Earth albedo. Ambient lighting was disabled, and to simulate the Sun we used a body of emissive material with UE4 bloom scatter convolution. Earth was modelled as a high-polygon sphere textured with Earth and cloud images from the Blue Marble Next Generation collection [35]. This is further masked to obtain specular reflections from the ocean surface. Additionally, a third-party asset is used to model the atmospheric scattering. Our scene includes Soyuz and Dragon spacecraft models with geometry imported from 3D model repositories [36].

To generate datasets, we randomly sample viewpoints around the day side of the Earth at low Earth orbit altitude. The Earth rotation, camera orientation and target object pose are all randomized. Specifically, the target object is placed randomly within the camera viewing frustum and an operating range of [10, 40] m. Our interface uses the UnrealCV plugin [37], which allows obtaining an RGB image and a depth map for each viewpoint. Images were rendered at a resolution of 1080×960 pixels by a virtual camera with a 90° horizontal FOV and auto-exposure.
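For illustration, the target placement step can be sampled as follows (independently of the UnrealCV interface; parameter names are ours):

```python
import numpy as np

def sample_target_position(h_fov_deg=90.0, aspect=1080.0 / 960.0, depth_range=(10.0, 40.0)):
    """Sample a random target position, in the camera frame, inside the viewing frustum."""
    z = np.random.uniform(*depth_range)                 # depth along the optical axis [m]
    half_w = z * np.tan(np.radians(h_fov_deg) / 2.0)    # horizontal half-extent at depth z
    half_h = half_w / aspect                            # vertical half-extent from the aspect ratio
    x = np.random.uniform(-half_w, half_w)
    y = np.random.uniform(-half_h, half_h)
    return np.array([x, y, z])
```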

V Data Augmentation and Sim-to-Real Transfer

Typical image transformations (e.g. cropping, flipping) have to be considered carefully, as they may change the object nature and the camera intrinsic parameters, which, in our case, are embedded in the network. One can apply random in-plane rotations, since there is no concept of up and down in space, but the object may get out of bounds due to the aspect ratio; therefore this was only done for the ESA & Stanford dataset, where the satellite is always nearly centered. Additionally, we can apply small random perturbations to the camera orientation by warping the images, as shown in Fig. 3. We do this during training and update the pose labels accordingly by repeating the encoding in (2). To generalize the learned models to real data, we convert the images to grayscale, change the image exposure and contrast, add AWG noise, blur the images and drop out patches, as shown in Fig. 3. The motivation for the latter is that it can help disentangle features of our mock-up that do not match the real object, and it can improve robustness to occlusions and shadows.
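A hedged OpenCV sketch of the rotation perturbation (assuming a pinhole intrinsic matrix K): warping with the homography K R K^-1 is equivalent to rotating the camera by R, so the orientation label is updated by the same rotation and re-encoded with (2).

```python
import cv2
import numpy as np

def perturb_camera_orientation(image, K, max_deg=10.0):
    """Warp the image as if the camera were rotated by a small random rotation R."""
    axis = np.random.randn(3)
    axis /= np.linalg.norm(axis)                       # random rotation axis
    angle = np.radians(np.random.uniform(0.0, max_deg))
    R, _ = cv2.Rodrigues(axis * angle)                 # 3x3 rotation matrix
    H = K @ R @ np.linalg.inv(K)                       # homography induced by a pure rotation
    warped = cv2.warpPerspective(image, H, (image.shape[1], image.shape[0]))
    return warped, R                                   # R must also be applied to the pose label
```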

Fig. 3: Image augmentation and sim-to-real examples. (a) Image warped due to a camera orientation perturbation; (b) and (c) images after our sim-to-real post-processing; (d) and (e) real images (5 seconds apart) of a Soyuz with the estimated pose overlaid, after training with data augmentation. Notice the thrusters in action in (e).

VI Experiments

We conducted experiments on datasets captured using URSO and on the ESA & Stanford benchmark dataset [5], named SPEED. The latter contains both synthetic and real 1920×1200 px images, generated in [28], of a mock-up of a satellite used in a flight mission, named PRISMA [38]. The testing set contains 300 real images and 2998 synthetic images, whereas the training set contains 12000 synthetic images and only 5 real images. All images are in grayscale. The labels of the testing set are not provided; instead, the methods are evaluated by the submission server based on a subset of the testing set. As for URSO, we collected one dataset for the Dragon spacecraft and two datasets for the Soyuz model with different operating ranges: soyuz_easy with [10, 20] m and soyuz_hard with [10, 40] m. Low ambient light was also exceptionally enabled on soyuz_easy. We have noticed that training on soyuz_easy converges faster, therefore our first experiments in this section use this dataset. All three datasets contain 5000 images, of which 10% were held out for testing and another 10% for validation. Performance is reported as the mean absolute location error, the mean angular error, and also the metric used by the ESA challenge server, referred to as ESA Error, which is the sum of the mean relative location error, as in (1), and the mean angular error.
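For reference, the combined metric can be sketched as follows (our own function; the angular term is the sign-invariant quaternion geodesic distance in radians):

```python
import numpy as np

def esa_error(t_est, t_gt, q_est, q_gt):
    """Mean relative location error, as in (1), plus mean angular error (rad)."""
    rel_t = np.linalg.norm(t_est - t_gt, axis=-1) / np.linalg.norm(t_gt, axis=-1)
    dots = np.clip(np.abs(np.sum(q_est * q_gt, axis=-1)), 0.0, 1.0)
    ang = 2.0 * np.arccos(dots)
    return np.mean(rel_t) + np.mean(ang)
```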

VI-A Implementation and Training Details

Networks were trained on one NVIDIA GTX 2080 Ti, using stochastic gradient descent with a momentum of 0.9, a weight decay regularization of 0.0001 and a batch size of 4 images. Training starts with weights from the backbone of Mask R-CNN trained on the COCO dataset, since we use high image resolutions. The learning rate was scheduled using step decay depending on the model convergence, which we have found to depend highly on the orientation estimation method, the number of orientation bins, the augmentation pipeline and the dataset. By default, unless explicitly stated, we used ResNet-50 with a bottleneck width of 32 filters, orientation soft classification with 16 bins per Euler angle, camera rotation perturbations with a maximum magnitude of 10° to augment the dataset, and images resized to half their original size. Training a model with this default configuration on soyuz_easy converges after 30 epochs at the initial learning rate plus 5 epochs after decaying it, whereas orientation regression takes approximately half the number of iterations.
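In PyTorch terms, this training setup corresponds roughly to the following (the initial learning rate and decay milestone are placeholders, since the schedule was tuned per experiment; PoseNet refers to the sketch in Section III):

```python
import torch

model = PoseNet(n_bins=16, bottleneck=32)            # default configuration sketched earlier
optimizer = torch.optim.SGD(model.parameters(),
                            lr=1e-3,                 # placeholder initial learning rate
                            momentum=0.9,
                            weight_decay=1e-4)
# step decay: drop the learning rate once the loss plateaus (here after 30 epochs)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30], gamma=0.1)
```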

VI-B Results

First, results from fine-tuning the parameters of our probabilistic orientation estimation based on soft classification are shown in Table I for soyuz_easy.

Angular error (deg)

Smoothing factor α | Bins | Train | Test
3 | 16 | 6.5 | 55.1
6 | 16 | 5.3 | 8.6
9 | 16 | 8.0 | 10.3
6 | 4 | 11.8 | 20.0
6 | 8 | 8.9 | 11.9
6 | 24 | 3.1 | 7.4
TABLE I: Impact of the orientation soft classification parameters on the train and test angular error. Bins is the number of bins per dimension.
Angular error (deg)

Method | Train | Test
Regress | 6.7 | 13.5
Regress | 6.9 | 13.4
Regress | 9.0 | 20.0
Class | 5.3 | 8.0
TABLE II: Orientation error for each method. One Regress row uses regression of three 3D keypoints, whereas the other two Regress rows use the two orientation losses from Section III-A at the best loss-weight ratio in Fig. 4.
Fig. 4: Test errors vs. the ratio of loss weights. The two Regress curves regress orientation using, respectively, the geodesic distance and the simpler expression from Section III-A.

As one can see, the smoothing factor α, which is used to scale the Gaussian tail, acts as a regularizer: when it is too small, it leads to overfitting, whereas when it is too high, precision is decreased, leading to underfitting. Increasing the number of bins per dimension of the discrete orientation space improves the precision, but the number of network parameters grows cubically. Furthermore, similarly to α, it can lead to overfitting, since bins will be activated less often during training.

Fig. 4 evaluates this method against regressing orientation on soyuz_easy, for different ratios of loss weights. Interestingly, for the three alternatives, using the network only for orientation estimation, by setting the weight of the location loss (1) to zero, yields a higher orientation error than performing both tasks simultaneously. The same cannot be said about the location error, which grows with this ratio. Table II compares the train and test orientation errors of these methods plus regressing instead three 3D keypoints. We can see that all three regression alternatives are outperformed and suffer from more overfitting on this dataset than the classification approach. It is worth noting that we experimented with the adaptive weighting based on the Laplace likelihood in [32] but achieved poor results. Moreover, optimal loss weights depend on the importance assigned to the specific tasks.

To demonstrate multimodal orientation estimation, we collected, via URSO, a dataset for the symmetrical marker shown in Fig. 5. As shown in this figure, after training, the network learns to output two modes representing the two possible solutions. Naively using our unimodal estimation method on this dataset results in the error distribution labeled Top-1 errors in Fig. 5, whereas if we use the multimodal EM algorithm proposed in Section III-C and score the best of two hypotheses (Top-2 errors), we see that this method frequently finds the right solution.

Fig. 5: Multimodal orientation estimation experiment with a symmetrical marker, shown on the top-left. Histograms of angular errors (deg) are shown on the top-right for the testing set: Top-1 error corresponds to our single-hypothesis estimation method, whereas Top-2 error is scored as the hypothesis with the smallest error among the top 2 hypotheses estimated by our EM framework. The bottom image shows on the top row the encoded label of the left frame, whereas the bottom row shows the respective network output after training.
Fig. 6: Bottleneck width and size of branch input layers vs. performance and complexity in terms of number of parameters on soyuz_easy.
Network | Loc. err. | Ori. err.
ResNet-18 | 1.7 m | 19.9°
ResNet-34 | 1.4 m | 20.0°
ResNet-50 | 1.1 m | 13.0°
ResNet-101 | 1.0 m | 12.2°
TABLE III: Impact of architecture depth on soyuz_hard.
Resolution | Loc. err. | Ori. err.
320×240 | 1.6 m | 24.9°
640×480 | 1.1 m | 13.0°
1280×960 | 1.3 m | 10.7°
TABLE IV: Impact of image resolution on soyuz_hard.
Aug. | Loc. err. | Ori. err.
None | 1.06 m | 19.5°
Rotation | 0.56 m | 8.0°
TABLE V: Impact of applying rotation perturbations on soyuz_easy.
Dataset | Loc. err. | Ori. err.
SPEED | 0.17 m | 4.0°
Soyuz hard | 0.8 m | 7.7°
Dragon hard | 0.9 m | 13.9°
TABLE VI: Results per dataset obtained with 24 bins per orientation dimension and 128 bottleneck filters.

Fig. 6 shows how feature compression in the bottleneck layer degrades performance and controls the network size. For both tasks, performance changes significantly from 8 to 128 convolutional filters. Beyond 128 filters, the performance gain incurs a large memory footprint. Performance does not seem to be very sensitive to the size of the first fully-connected layers of our head branches.

The impact of the architecture depth is shown in Table III. ResNet with 50 layers is significantly better than its shallower counterparts; however, adding more layers does not seem to improve the performance much further. Table IV shows that orientation estimation is quite sensitive to the image input resolution. The same is not clear for localization.

Fig. 7: Test-set errors distributed by object distance, for the models reported in Table VI.

In terms of data augmentation, as reported in Table V, rotation perturbations prove to be an effective technique to augment the dataset, and our sim-to-real augmentation is essential for applying models learned on URSO to real footage, as shown in https://youtu.be/x8IbxmOz730, particularly to deal with the lighting changes in Fig. 3. Furthermore, as shown in Table VII, we achieved 2nd place on the real dataset just by using our sim-to-real augmentation pipeline with the 5 real images provided.

Table VI compares performance between the three datasets, using an increased bottleneck width and orientation output resolution. As we can see, SPEED, with better lighting conditions, is the easiest dataset, and dragon_hard is the most challenging due to viewpoint ambiguity, as shown in Fig. 8.a. We can also see this in Fig. 7.

Team | Real err. | Synthetic err.
UniAdelaide | 0.3752 | 0.0095
EPFL_cvlab | 0.1140 | 0.0215
Triple ensemble (ours) | 0.1555 | 0.0571
Best model (ours) | 0.1630 | 0.0604
Top 10 average | 1.3848 | 0.1515
TABLE VII: ESA pose estimation challenge final scores of the top 3 teams. Results were obtained on 20% of the full test set. For the complete leaderboard, refer to: https://kelvins.esa.int/satellite-pose-estimation-challenge/results/

Table VII summarizes the results of the ESA pose estimation challenge. Our best single model used a bottleneck width of 800 filters and 64 bins per orientation dimension and was trained for a total of 500 epochs, whereas our second best model, using 512 bottleneck filters and 32×32×32 orientation bins, achieved respectively 0.144 and 0.067 on the real and synthetic sets. To combine the higher precision of the best model with the less overfitting-prone second model, we used a triple ensemble, which averages the results (using quaternion averaging) of this last model plus two models with 64×64×64 bins, picked at different training epochs. Our accuracy comes at the cost of a very large number of parameters (around 500M) and is still far from the scores of the top 2 teams, which rely on 2D keypoint regression, image cropping and zooming, and robust PnP. As shown in Fig. 7, gross errors start appearing after 20 m, therefore we could also benefit from running the models a second time on zoomed images, since we only used half the original size.

Fig. 8: Failure and success cases from our testing sets, with predicted and ground-truth poses and orientation weights. Predicted and labeled 2D positions are shown as green and red dots, respectively. Predicted and labeled orientations are shown in the polar plots as Euler angles. (a) Incorrect orientation due to an ambiguous view; notice how the respective distribution of weights is more spread out relative to the other examples. (b) Poor orientation estimation due to poor lighting. (c) and (d) Good results under challenging background.

VII Conclusion and Future Work

This paper proposed both a simulator and a DL framework for spacecraft pose estimation. Experiments with this framework reveal the impact of several network hyperparameters and training choices and attempt to answer open questions, such as: what is the best way to estimate orientation? We conclude that estimating orientation based on soft classification gives better results than direct regression and, furthermore, provides the means to model uncertainty. This information is useful not only for making decisions but also for filtering the pose when a temporal sequence is provided. A promising direction is to address tracking using recurrent neural networks and video sequences generated with URSO. As future work, we also plan to extend URSO to other tasks, namely instance segmentation and SLAM, which is appropriate for targets with unknown geometry.

Nevertheless, the architecture proposed in this work is not scalable in terms of image and orientation resolution. Future work should consider how to replace the dense connections without sacrificing performance, e.g., by pruning the last-layer connections. Additionally, the results reported in this work were obtained using a dedicated network for each dataset. Sharing the same backbone across datasets may be beneficial in terms of efficiency and performance.

Acknowledgments

This work is supported by grant EP/R026092 (FAIR-SPACE Hub) through UKRI under the Industry Strategic Challenge Fund (ISCF) for Robotics and AI Hubs in Extreme and Hazardous Environments. The authors are also grateful for the feedback and discussions with Peter Blacker, Angadh Nanjangud and Zhou Hao.

References

  • [1] S. Nanjangud, P. Blacker, S. Bandyopadhyay, and Y. Gao, “Robotics and ai-enabled on-orbit operations with future generation of small satellites,” Proceedings of the IEEE, 2018.
  • [2] B. Taylor, G. Aglietti, S. Fellowes, S. Ainley, T. Salmon, I. Retat, C. Burgess, A. Hall, T. Chabot, K. Kanan, et al., “Remove debris mission, from concept to orbit,” in 32nd Annual AIAA/USU Conference on Small Satellites, 2018.
  • [3] R. Opromolla, G. Fasano, G. Rufino, and M. Grassi, “A review of cooperative and uncooperative spacecraft pose determination techniques for close-proximity operations,” Progress in Aerospace Sciences, vol. 93, pp. 53–72, 2017.
  • [4] J. L. Forshaw, G. S. Aglietti, N. Navarathinam, H. Kadhem, T. Salmon, A. Pisseloup, E. Joffre, T. Chabot, I. Retat, R. Axthelm, et al., “Removedebris: An in-orbit active debris removal demonstration mission,” Acta Astronautica, vol. 127, pp. 448–463, 2016.
  • [5] “European space agency, kelvins - esa’s advanced concepts competition website.” https://kelvins.esa.int/.
  • [6] B. Naasz, J. V. Eepoel, S. Queen, C. M. Southward, and J. Hannah, “Flight results from the hst sm4 relative navigation sensor system,” 2010.
  • [7] J. M. Kelsey, J. Byrne, M. Cosgrove, S. Seereeram, and R. K. Mehra, “Vision-based relative pose estimation for autonomous rendezvous and docking,” in IEEE Aerospace Conference, 2006.
  • [8] C. Liu and W. Hu, “Relative pose estimation for cylinder-shaped spacecrafts using single image,” IEEE Transactions on Aerospace and Electronic Systems, vol. 50, no. 4, 2014.
  • [9] A. Petit, E. Marchand, and K. Kanani, “A robust model-based tracker combining geometrical and color edge information,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3719–3724, 2013.
  • [10] A. Petit, E. Marchand, R. Sekkal, and K. Kanani, “3d object pose detection using foreground/background segmentation,” in International Conference on Robotics and Automation (ICRA), pp. 1858–1865, IEEE, 2015.
  • [11] T. Drummond and R. Cipolla, “Real-time visual tracking of complex structures,” IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2002.
  • [12] A. Kendall, M. Grimes, and R. Cipolla, “Posenet: A convolutional network for real-time 6-dof camera relocalization,” in IEEE International Conference on Computer Vision (ICCV), pp. 2938–2946, 2015.
  • [13] Y. Xiang, T. Schmidt, V. Narayanan, and D. Fox, “Posecnn: A convolutional neural network for 6d object pose estimation in cluttered scenes,” 2018.
  • [14] W. Kehl, F. Manhardt, F. Tombari, S. Ilic, and N. Navab, “Ssd-6d: Making rgb-based 3d detection and 6d pose estimation great again,” in IEEE International Conference on Computer Vision, pp. 1521–1529, 2017.
  • [15] T.-T. Do, M. Cai, T. Pham, and I. Reid, “Deep-6dpose: recovering 6d object pose from a single rgb image,” arXiv preprint arXiv:1802.10367, 2018.
  • [16] Y. Hu, J. Hugonot, P. Fua, and M. Salzmann, “Segmentation-driven 6d object pose estimation,” in CVPR, 2019.
  • [17] B. Tekin, S. N. Sinha, and P. Fua, “Real-time seamless single shot 6d object pose prediction,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 292–301, 2018.
  • [18] M. Rad and V. Lepetit, “Bb8: A scalable, accurate, robust to partial occlusion method for predicting the 3d poses of challenging objects without using depth,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 3828–3836, 2017.
  • [19] A. Kendall and R. Cipolla, “Modelling uncertainty in deep learning for camera relocalization,” in International Conference on Robotics and Automation (ICRA), pp. 4762–4769, IEEE, 2016.
  • [20] S. Mahendran, H. Ali, and R. Vidal, “3d pose regression using convolutional neural networks,” in IEEE International Conference on Computer Vision (ICCV), pp. 2174–2182, 2017.
  • [21] H. Su, C. R. Qi, Y. Li, and L. J. Guibas, “Render for cnn: Viewpoint estimation in images using cnns trained with rendered 3d model views,” in IEEE International Conference on Computer Vision (ICCV), 2015.
  • [22] K. Hara, R. Vemulapalli, and R. Chellappa, “Designing deep convolutional neural networks for continuous object orientation estimation,” arXiv preprint arXiv:1702.01499, 2017.
  • [23] S. Wang, R. Clark, H. Wen, and N. Trigoni, “Deepvo: Towards end-to-end visual odometry with deep recurrent convolutional neural networks,” in International Conference on Robotics and Automation (ICRA), pp. 2043–2050, IEEE, 2017.
  • [24] H. Zhou, B. Ummenhofer, and T. Brox, “Deeptam: Deep tracking and mapping,” in The European Conference on Computer Vision (ECCV), September 2018.
  • [25] T. Hodan, F. Michel, E. Brachmann, W. Kehl, A. Glent Buch, D. Kraft, B. Drost, J. Vidal, S. Ihrke, X. Zabulis, et al., “Bop: benchmark for 6d object pose estimation,” in European Conference on Computer Vision (ECCV), pp. 19–34, 2018.
  • [26] S. Hinterstoisser, V. Lepetit, N. Rajkumar, and K. Konolige, “Going further with point pair features,” in European Conference on Computer Vision (ECCV), pp. 834–848, Springer, 2016.
  • [27] S. Sharma, C. Beierle, and S. D’Amico, “Pose estimation for non-cooperative spacecraft rendezvous using convolutional neural networks,” in IEEE Aerospace Conference, pp. 1–12, 2018.
  • [28] S. Sharma and S. D’Amico, “Pose estimation for non-cooperative rendezvous using neural networks,” in AAS/AIAA Astrodynamics Specialist Conference, 2019.
  • [29] A. Canziani, A. Paszke, and E. Culurciello, “An analysis of deep neural network models for practical applications,” arXiv preprint arXiv:1605.07678, 2016.
  • [30] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask r-cnn,” in IEEE International Conference on Computer Vision (ICCV), pp. 2961–2969, 2017.
  • [31] K. S. Arun, T. S. Huang, and S. D. Blostein, “Least-squares fitting of two 3-d point sets,” IEEE Transactions on pattern analysis and machine intelligence, no. 5, pp. 698–700, 1987.
  • [32] A. Kendall and R. Cipolla, “Geometric loss functions for camera pose regression with deep learning,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5974–5983, 2017.
  • [33] L. Liu, L. Wang, and X. Liu, “In defense of soft-assignment coding,” in International Conference on Computer Vision (ICCV), 2011.
  • [34] F. L. Markley, Y. Cheng, J. L. Crassidis, and Y. Oshman, “Averaging quaternions,” Journal of Guidance, Control, and Dynamics, vol. 30, no. 4, pp. 1193–1197, 2007.
  • [35] “Nasa - visible earth.” https://visibleearth.nasa.gov/.
  • [36] “Turbosquid.” https://www.turbosquid.com/.
  • [37] W. Qiu, F. Zhong, Y. Zhang, S. Qiao, Z. Xiao, T. S. Kim, Y. Wang, and A. Yuille, “Unrealcv: Virtual worlds for computer vision,” ACM Multimedia Open Source Software Competition, 2017.
  • [38] S. D’Amico, J.-S. Ardaens, and R. Larsson, “Spaceborne autonomous formation-flying experiment on the prisma mission,” Journal of Guidance, Control, and Dynamics, vol. 35, no. 3, pp. 834–850, 2012.