Deep Learning for Spacecraft Pose Estimation from Photorealistic Rendering
On-orbit proximity operations in space rendezvous, docking and debris removal require precise and robust 6D pose estimation under a wide range of lighting conditions and against a highly textured background, i.e., the Earth.
This paper investigates leveraging deep learning and photorealistic rendering for monocular pose estimation of known uncooperative spacecraft. We first present a simulator built on Unreal Engine 4, named URSO, to generate labeled images of spacecraft orbiting the Earth, which can be used to train and evaluate neural networks.
Secondly, we propose a deep learning framework for pose estimation based on orientation soft classification, which allows modelling orientation ambiguity as a mixture of Gaussians. This framework was evaluated both on URSO datasets and on the ESA pose estimation challenge. In this competition, our best model achieved 3rd place on the synthetic test set and 2nd place on the real test set. Moreover, our results show the impact of several architectural and training aspects, and we demonstrate qualitatively how models learned on URSO datasets can perform on real images from space.
Spacecraft position and attitude estimation is essential to on-orbit operations, e.g., formation flying, rendezvous, docking, servicing and space debris removal. These rely on precise and robust estimation of the relative pose and trajectory of target objects in close proximity under harsh lighting conditions and against a highly textured background (i.e. Earth). As surveyed in the literature, depending on the specific operation scenario, targets may be either: (i) cooperative, if they use a dedicated radio-link, fiducial markers or retro-reflectors to aid pose determination, or (ii) non-cooperative, with either unknown or known geometry. Recently, the latter has been gaining interest from both the research community and space agencies, due mainly to the accumulation of inactive satellites and space debris in low Earth orbit, but also due to military space operations. For instance, this year ESA opened a competition to estimate the pose of a known spacecraft from a single image using supervised learning. This paper addresses this problem.
The main limitation of deep learning (DL) is that it needs a lot of data, which is especially costly to obtain in space. Therefore, as our first contribution, we propose a visual simulator built on Unreal Engine 4, named URSO, which allows obtaining photorealistic images and depth masks of commonly used spacecraft orbiting the Earth, as seen in Fig. 1. Secondly, we carried out an extensive experimental study of a DL-based pose estimation framework on datasets obtained from URSO, where we investigate the performance impact of several aspects of the architecture and training configuration. Among our findings, we conclude that data augmentation with random camera orientation perturbations is quite effective at combating overfitting, and we present a probabilistic orientation estimation via soft classification that performs significantly better than direct orientation regression and can further model uncertainty due to orientation ambiguity as a Gaussian mixture. Moreover, our best solution achieved 3rd place on the synthetic dataset and 2nd place on the real dataset of the ESA pose estimation challenge. We also demonstrate qualitatively how models trained on URSO data can generalize to real images from space through our augmentation pipeline.
II Related Work
Previous monocular solutions [6, 7, 8, 9, 10] to spacecraft tracking and pose estimation rely on model-based approaches that align a wireframe model of the object to an edge image (typically given by a Canny detector) of the real object based on heuristics. However, objects are more than just a collection of edges and geometric primitives. Convolutional Neural Networks (CNNs) can learn more complex features relevant to the task at hand while ignoring background features (e.g. clouds) based on context.
Despite the maturity of DL in many computer vision tasks, only recently [12, 13, 14, 15, 16, 17, 18] has DL become common in pose estimation problems. Kendall et al. first proposed adapting and training GoogLeNet on Structure-from-Motion models for camera relocalization. Their network was trained to regress a quaternion by using the Euclidean distance between quaternions as a loss function. Moreover, they extended their method to model uncertainty by using Monte Carlo sampling from a network with dropout. Kehl et al. proposed a DL solution for detection and pose estimation of multiple objects based on hard viewpoint classification, where ambiguous views are manually removed a priori. On the other hand, Xiang et al. proposed a model based on a segmentation network for handling multiple objects. While object locations are estimated using Hough voting on the network image output, their orientations are estimated through quaternion regression following ROI pooling. To account for object symmetries, a loss function based on ICP is used, but this is prone to local minima and requires a depth map. Moreover, the segmentation assumes only one object instance per class. By contrast, Thanh-Toan et al. extended an instance segmentation network (i.e. Mask R-CNN) to pose estimation by simply adding a head branch, which regresses orientation as the angle-axis vector (i.e. the Lie algebra of SO(3)). Although this is a minimal parameterization that avoids the quaternion normalization layer, they still employed a Euclidean loss function. Mahendran et al. also regress the angle-axis vector, but they minimize the geodesic loss directly. Su et al. performed fine-grained hard viewpoint classification. Hara et al. compared regressing the azimuth, using either a Euclidean loss or the angular difference loss, versus hard classification with the mean-shift algorithm to retrieve a continuous value. DL has also been successfully applied to visual odometry [23, 24]. While Wang et al.
simply regress orientation as Euler angles, Zhou et al. regress multiple (i.e. 64) pose hypotheses simultaneously with the angle-axis representation and then average them, since pose updates in visual odometry are usually small. There is a large body of work on pose estimation from RGB-D images, which was recently comprehensively evaluated, with ICP typically used for pose refinement. In their benchmark, Hodan et al. concluded that learning-based solutions are still not on par with point-cloud-based methods in terms of precision. More recently, however, several works [16, 17, 18] have advanced the state of the art by refraining from estimating pose directly and instead using segmentation networks to regress the 2D projections of predefined 3D keypoints (e.g. object bounding box corners), from which pose is finally estimated using robust PnP solutions, e.g., embedded in RANSAC. Such approaches, however, need further work to handle very small or far-away objects, as they rely on coarse segmentation grids.
Sharma et al. were the first to propose using CNNs for spacecraft pose estimation, based on hard viewpoint classification, but later they proposed estimating position from bounding box detection and orientation via soft classification. Although the position approach fails when part of the object is outside the field of view, the orientation estimation has its merits. Two head branches are used for orientation estimation: one does hard classification, given a set of pre-defined quaternions, to find the closest quaternions to the actual one; a second branch then estimates the weights for these quaternions, and the final orientation is given by the weighted average quaternion. Our method for orientation estimation is similar to this approach, but we propose a more principled solution. Notably, our framework does not require two orientation branches, provides intuitive regularization parameters and can handle multiple hypotheses due to perceptual aliasing.
III Pose Estimation Framework
Our network architecture, depicted in Fig. 2, is aimed at simplicity rather than efficiency, to allow a first ablation study. We adopted the ResNet architecture with pre-trained weights as the network backbone, due to its low number of pooling layers and good accuracy-complexity trade-off. The last fully-connected layer and the global average pooling layer of the original network were removed to keep spatial feature resolution, effectively leaving only one pooling layer at the second layer. The global pooling layer was replaced by one extra 3×3 convolution with a stride of 2 (bottleneck layer) to compress the CNN features, since our task branches are fully connected to the input tensor. For lower space complexity, one could instead use a Region Proposal Network as in [13, 15, 30], but this complicates our end-to-end pose estimation. As a drawback, our network does not handle multiple objects per se.
Our 3D location estimation is a simple regression branch with two fully-connected layers, but instead of minimizing the absolute Euclidean distance, we minimize the relative error, corresponding to the first term of our total loss function:

$$E_{loc} = \frac{\left\| \mathbf{t} - \hat{\mathbf{t}} \right\|_2}{\left\| \mathbf{t} \right\|_2}$$
where $\hat{\mathbf{t}}$ and $\mathbf{t}$ are respectively the estimated and ground-truth translation vectors. The sole advantage of minimizing the relative error is that the fine-tuned loss weights in our experiments generalize better to other datasets, as this loss does not depend on the translation scale. To avoid having to fine-tune loss weights, in Section VI we also experimented with instead regressing three virtual 3D keypoints and then estimating the pose using a closed-form solution.
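This relative error can be sketched in a couple of lines (NumPy; the function name is ours):

```python
import numpy as np

def relative_location_error(t_est, t_gt):
    """Relative translation error: ||t_gt - t_est|| / ||t_gt||.

    Unlike the absolute Euclidean distance, this is scale-invariant,
    so tuned loss weights transfer better across datasets.
    """
    t_est, t_gt = np.asarray(t_est, float), np.asarray(t_gt, float)
    return np.linalg.norm(t_gt - t_est) / np.linalg.norm(t_gt)
```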
III-A Direct Orientation Regression
While several works [12, 15, 23] have used the $\ell_1$ or $\ell_2$ loss to regress orientation, these losses do not correctly represent the actual angular distance for any orientation representation. Quaternions, for example, are a non-injective representation: $\mathbf{q}$ and $-\mathbf{q}$ encode the same rotation. While one can map quaternions to lie only on one hemisphere, distances to quaternions near the equator will still not express the geodesic distance. One can instead directly minimize the geodesic distance: $2\arccos(|\mathbf{q} \cdot \hat{\mathbf{q}}|)$, or an even simpler expression: $1 - |\mathbf{q} \cdot \hat{\mathbf{q}}|$. In our framework, we have experimented with both loss functions to regress a unit quaternion $\hat{\mathbf{q}}$, subject to a normalization layer. One possible issue with the first expression is that the derivative of $\arccos(x)$ is infinite at $x = 1$, but this can easily be solved by scaling down $x$.
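For concreteness, both orientation distances can be sketched as follows (NumPy; function names are ours, and the clipping used to keep arccos numerically safe is our addition):

```python
import numpy as np

def geodesic_loss(q_est, q_gt, eps=1e-7):
    """Angular distance between two unit quaternions, in radians.

    The absolute value of the dot product handles the double cover
    (q and -q encode the same rotation); clipping avoids the infinite
    derivative of arccos at 1.
    """
    d = abs(float(np.dot(q_est, q_gt)))
    return 2.0 * np.arccos(np.clip(d, 0.0, 1.0 - eps))

def simple_ori_loss(q_est, q_gt):
    """Cheaper monotonic surrogate of the geodesic distance."""
    return 1.0 - abs(float(np.dot(q_est, q_gt)))
```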
III-B Probabilistic Orientation Soft Classification
Alternatively, we propose to do continuous orientation estimation via classification with soft assignment coding. The key idea is to encode each label (a ground-truth quaternion $\mathbf{q}$) as a Gaussian random variable in a discrete orientation output space (represented in Fig. 2), so that the network learns to output probability mass functions. To this end, a 3D histogram is used as the network output, where each bin maps to a combination of discrete Euler angles specified by the quantization step. Special care is taken to avoid redundant bins at the Gimbal lock and borders. Let $\{\mathbf{q}_1, \dots, \mathbf{q}_N\}$ be the quaternions corresponding to the histogram bins; then, during training, each bin $i$ is encoded with the soft assignment function:

$$w_i(\mathbf{q}) = \frac{K(\mathbf{q}_i, \mathbf{q})}{\sum_{j=1}^{N} K(\mathbf{q}_j, \mathbf{q})}$$
where the kernel function $K$ uses the normalized angular difference between two quaternions:

$$K(\mathbf{q}_i, \mathbf{q}) = \exp\!\left(-\frac{d(\mathbf{q}_i, \mathbf{q})^2}{2\sigma^2}\right), \qquad d(\mathbf{q}_i, \mathbf{q}) = \frac{2\arccos(|\mathbf{q}_i \cdot \mathbf{q}|)}{\pi}
$$
and the variance $\sigma^2$ is given by the quantization error approximation $\sigma^2 = \delta\,\Delta^2/12$, where $\Delta$ represents the quantization step, $\delta$ is the smoothing factor that controls the Gaussian width and $N$ is the number of bins per dimension (i.e. Euler angle).
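As an illustration, the soft-assignment encoding can be sketched as follows (hypothetical names; the set of bin quaternions and the use of the normalized bin width as quantization step are assumptions):

```python
import numpy as np

def angular_dist(q1, q2):
    """Normalized angular difference between two unit quaternions, in [0, 1]."""
    return 2.0 * np.arccos(np.clip(abs(float(np.dot(q1, q2))), 0.0, 1.0)) / np.pi

def encode_soft_labels(q_gt, bin_quats, delta=2.0, n_bins_per_dim=16):
    """Encode a ground-truth quaternion as a probability mass function over
    histogram bins, using a Gaussian kernel on angular distance.

    The variance follows the uniform quantization-error approximation
    sigma^2 = delta * step^2 / 12, with step taken as the normalized
    bin width (an assumption in this sketch).
    """
    step = 1.0 / n_bins_per_dim
    var = delta * step ** 2 / 12.0
    d = np.array([angular_dist(q, q_gt) for q in bin_quats])
    k = np.exp(-d ** 2 / (2.0 * var))
    return k / k.sum()            # normalized soft assignments
```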
At test time, given the bin activations $\{w_1, \dots, w_N\}$ and the respective quaternions $\{\mathbf{q}_1, \dots, \mathbf{q}_N\}$, mapped to one hemisphere, we can fit a quaternion by minimizing the weighted least squares:

$$\hat{\mathbf{q}} = \underset{\|\mathbf{q}\|=1}{\arg\min} \sum_{i=1}^{N} w_i \left\| \mathbf{q} - \mathbf{q}_i \right\|_2^2$$
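This weighted fit has a closed-form solution via the eigendecomposition-based quaternion averaging of Markley et al. (cited in the references). A minimal NumPy sketch, with a function name of our choosing:

```python
import numpy as np

def average_quaternion(quats, weights):
    """Weighted quaternion average (Markley et al.): the minimizer is the
    eigenvector of M = sum_i w_i q_i q_i^T with the largest eigenvalue.

    The sign ambiguity of quaternions is irrelevant here, since
    q_i q_i^T == (-q_i)(-q_i)^T.
    """
    Q = np.asarray(quats, float)
    w = np.asarray(weights, float)
    M = (w[:, None] * Q).T @ Q        # 4x4 accumulation matrix
    vals, vecs = np.linalg.eigh(M)    # symmetric matrix -> eigh
    return vecs[:, np.argmax(vals)]   # unit eigenvector
```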
III-C Multimodal Orientation Estimation
When there are ambiguous views in the training set, this results in one-to-many mappings, and therefore the optimal network minimizing the cross-entropy loss, given the soft assignments in (2), will output a multimodal distribution. To extract multiple orientation hypotheses from such a network's output, we propose an Expectation-Maximization (EM) framework to fit a Gaussian mixture model with means $\{\boldsymbol{\mu}_1, \dots, \boldsymbol{\mu}_M\}$. In the Expectation step, for every model $k$ and bin $i$ we compute the membership:

$$z_{ik} = \frac{\pi_k K(\mathbf{q}_i, \boldsymbol{\mu}_k)}{\sum_{j=1}^{M} \pi_j K(\mathbf{q}_j, \boldsymbol{\mu}_j)}$$
where the kernel variance is initialized as in (3) and the priors $\pi_k$ as equiprobable. These are then updated in the maximization step:

$$\pi_k = \frac{\sum_{i=1}^{N} w_i z_{ik}}{\sum_{i=1}^{N} w_i}$$
where $\boldsymbol{\mu}_k$ is first obtained by solving (4) with the weights $w_i z_{ik}$. The model means are initialized as the bins with the strongest activations after non-maximum suppression. To find the optimal number of models $M$, we increase $M$ until the log-likelihood stops increasing by more than a threshold.
IV URSO: Unreal Rendered Spacecraft On Orbit
Our simulator leverages Unreal Engine 4 (UE4) features to render realistic images, e.g., physically based materials, bloom and lens flare. Lighting in our environment consists simply of a directional light and a spotlight to simulate respectively sunlight and Earth albedo. Ambient lighting was disabled, and to simulate the sun we used a body of emissive material with UE4 bloom scatter convolution. Earth was modelled as a high-polygon sphere textured with Earth and cloud images from the Blue Marble Next Generation collection. This is further masked to obtain specular reflections from the ocean surface. Additionally, a third-party asset is used to model the atmospheric scattering. Our scene includes Soyuz and Dragon spacecraft models with geometry imported from 3D model repositories.
To generate datasets, we randomly sample viewpoints around the day side of the Earth at low Earth orbit altitude. The Earth rotation, camera orientation and target object pose are all randomized. Specifically, the target object is placed randomly within the camera viewing frustum at an operating range between [10, 40] m. Our interface uses the UnrealCV plugin, which allows obtaining an RGB image and a depth map for each viewpoint. Images were rendered at a resolution of 1080×960 pixels by a virtual camera with a 90° horizontal FOV and auto-exposure.
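The frustum-constrained placement above can be sketched as follows (the FOV, aspect ratio and range mirror the text, but the uniform sampling distribution and parameter names are assumptions):

```python
import numpy as np

def sample_target_position(rng, fov_deg=90.0, aspect=1080 / 960,
                           rmin=10.0, rmax=40.0):
    """Place the target randomly inside the camera viewing frustum at an
    operating range in [rmin, rmax] metres (camera-frame coordinates,
    z along the optical axis)."""
    z = rng.uniform(rmin, rmax)                       # depth along optical axis
    half_w = z * np.tan(np.radians(fov_deg / 2.0))    # horizontal half-extent
    half_h = half_w / aspect                          # vertical half-extent
    x = rng.uniform(-half_w, half_w)
    y = rng.uniform(-half_h, half_h)
    return np.array([x, y, z])
```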
V Data Augmentation and Sim-to-Real Transfer
Typical image transformations (e.g. cropping, flipping) have to be considered carefully, as these may change the object nature or the camera intrinsic parameters, which, in our case, are embedded in the network. One can do random in-plane rotation, since there is no concept of up and down in space, but the object may go out of bounds due to the aspect ratio; therefore this was only done for the ESA & Stanford dataset, where the satellite is always nearly centered. Additionally, we can apply small random perturbations to the camera orientation by warping the images, as shown in Fig. 3. We do this during training and update the pose labels accordingly by repeating the encoding in (2). To generalize the learned models to real data, we convert the images to grayscale, change the image exposure and contrast, add additive white Gaussian noise, blur the images and drop out patches, as shown in Fig. 3. The motivation for the latter is that it can help disentangle features of our mock-up that do not match the real object, and it can improve robustness to occlusions and shadows.
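A minimal sketch of the sim-to-real part of this pipeline (NumPy; all jitter ranges and patch sizes are assumptions, and the blur step is omitted):

```python
import numpy as np

def sim_to_real_augment(img, rng):
    """Sim-to-real augmentation on a float image in [0, 1], shape HxWx3.
    Steps follow the text: grayscale, exposure/contrast jitter, additive
    white Gaussian noise, and random patch dropout."""
    h, w, _ = img.shape
    gray = img.mean(axis=2)                              # grayscale conversion
    gain = rng.uniform(0.7, 1.3)                         # exposure jitter
    contrast = rng.uniform(0.8, 1.2)                     # contrast jitter
    out = np.clip((gray - 0.5) * contrast + 0.5, 0, 1) * gain
    out += rng.normal(0.0, 0.02, out.shape)              # AWGN
    for _ in range(rng.integers(1, 4)):                  # patch dropout
        ph, pw = h // 8, w // 8
        y, x = rng.integers(0, h - ph), rng.integers(0, w - pw)
        out[y:y + ph, x:x + pw] = 0.0
    return np.clip(out, 0.0, 1.0)
```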
VI Experiments
We conducted experiments on datasets captured using URSO and on the ESA & Stanford benchmark dataset, named SPEED. The latter contains both synthetic and real 1920×1200 px images of a mock-up model of a satellite used in a flight mission, named PRISMA. The test set contains 300 real images and 2998 synthetic images, whereas the training set contains 12000 synthetic images and only 5 real images. All images are grayscale. The labels of the test set are not provided; instead, methods are evaluated by the submission server on a subset of the test set. As for URSO, we collected one dataset for the Dragon spacecraft and two datasets for the Soyuz model with different operating ranges: soyuz_easy with [10-20] m and soyuz_hard with [10-40] m. Low ambient light was also exceptionally enabled on soyuz_easy. We noticed that training on soyuz_easy converges faster, therefore our first experiments in this section use this dataset. All three datasets contain 5000 images, of which 10% were held out for testing and another 10% for validation. Performance is reported as the mean absolute location error, the mean angular error, and the metric used by the ESA challenge server, referred to as the ESA Error, which is the sum of the mean relative location error, as in (1), and the mean angular error.
VI-A Implementation and Training Details
Networks were trained on one NVIDIA RTX 2080 Ti, using stochastic gradient descent with a momentum of 0.9, a weight decay regularization of 0.0001 and a batch size of 4 images. Training starts with weights from the backbone of Mask R-CNN trained on the COCO dataset, since we use high image resolutions. The learning rate was scheduled using step decay depending on the model convergence, which we found to depend highly on the orientation estimation method, the number of orientation bins, the augmentation pipeline and the dataset. By default, unless explicitly stated, we used: ResNet-50 with a bottleneck width of 32 filters, orientation soft classification with 16 bins per Euler angle, camera rotation perturbations with a maximum magnitude of 10° to augment the dataset, and images resized to half their original size. Training a model with this default configuration on soyuz_easy converges after 30 epochs, plus 5 epochs with a reduced learning rate, whereas orientation regression takes approximately half the number of iterations.
First, results from fine-tuning the parameters of our probabilistic orientation estimation based on soft classification are shown in Table I for soyuz_easy.
As one can see, the smoothing factor, which is used to scale the Gaussian tail, acts as a regularizer: when it is too small, it leads to overfitting, whereas when it is too high, precision is decreased, leading to underfitting. Increasing the number of bins per dimension of the discrete orientation space improves precision, but the number of network parameters grows cubically. Furthermore, it can also lead to overfitting, since bins are activated less often during training.
Fig. 4 evaluates this method against regressing orientation on soyuz_easy, for different ratios of loss weights. Interestingly, for the three alternatives, using the network only for orientation estimation, by zeroing the location term in (1), yields a higher orientation error than performing both tasks simultaneously. The same cannot be said about the location error, which grows as more weight is given to the orientation loss. Table II compares the orientation errors on the train and test sets for these methods, plus regressing three 3D keypoints instead. We can see that all three regression alternatives are outperformed by, and suffer from more overfitting on this dataset than, the classification approach. It is worth noting that we experimented with the adaptive weighting based on the Laplace likelihood but achieved poor results. Moreover, optimal loss weights depend on the importance assigned to the specific tasks.
To demonstrate multimodal orientation estimation, we collected, via URSO, a dataset for the symmetrical marker shown in Fig. 5. As shown in this figure, after training, the network learns to output two modes representing the two possible solutions. Naively using our unimodal estimation method on this dataset results in the error distribution labeled Top-1 errors in Fig. 5, whereas if we use the multimodal EM algorithm proposed in Section III-C and score the best of two hypotheses (Top-2 errors), we see that this method frequently finds the right solution.
| Dataset | Loc. err. | Ori. err. |
|---|---|---|
| Soyuz hard | 0.8 m | 7.7° |
| Dragon hard | 0.9 m | 13.9° |
Fig. 6 shows how feature compression in the bottleneck layer degrades performance and controls the network size. For both tasks, performance changes significantly between 8 and 128 convolutional filters. Beyond 128 features, the performance gain incurs a large memory footprint. Performance does not seem very sensitive to the size of the first fully-connected layers of our head branches.
The impact of the architecture depth is shown in Table VI-B. ResNet with 50 layers is significantly better than its shallower counterparts; however, adding more layers does not seem to improve performance much further. Table IV shows that orientation estimation is quite sensitive to the image input resolution. The same is not clear for localization.
In terms of data augmentation, as reported in Table VI-B, rotation perturbations prove to be an effective technique for augmenting the dataset, and our sim-to-real augmentation is essential for applying models learned on URSO to real footage, as shown in https://youtu.be/x8IbxmOz730, particularly to deal with the lighting changes in Fig. 3. Furthermore, as shown in Table VII, we achieved 2nd place on the real dataset just by using our sim-to-real augmentation pipeline with the 5 real images provided.
Table VI-B compares performance across the three datasets using an increased bottleneck width and orientation output resolution. As we can see, SPEED, with better lighting conditions, is the easiest dataset, and dragon_hard is the most challenging due to viewpoint ambiguity, as shown in Fig. 8.a. We can also see this in Fig. 7.
| Team | Real err. | Synthetic err. |
|---|---|---|
| Triple ensemble (ours) | 0.1555 | 0.0571 |
| Best model (ours) | 0.1630 | 0.0604 |
| Top 10 average | 1.3848 | 0.1515 |
Table VII summarizes the results of the ESA pose estimation challenge. Our best single model used a bottleneck width of 800 filters and 64 bins per orientation dimension and was trained for a total of 500 epochs, whereas our second-best model, using 512 bottleneck filters and 32×32×32 orientation bins, achieved respectively 0.144 and 0.067 on the real and synthetic sets. To combine the higher precision of the best model with the less overfitting-prone second model, we used a triple ensemble, which averages the results (using quaternion averaging) of this last model plus two models with 64×64×64 bins, picked at different training epochs. Our accuracy comes at the cost of a very large number of parameters (around 500M) and is still far from the scores of the top 2 teams, which rely on 2D keypoint regression solutions, image cropping + zooming and robust PnP. As shown in Fig. 7, gross errors start appearing after 20 m, therefore we could also benefit from running the models a second time on zoomed images, since we only used half the original size.
VI-C Conclusion and Future Work
This paper proposed both a simulator and a DL framework for spacecraft pose estimation. Experiments with this framework reveal the impact of several network hyperparameters and training choices, and attempt to answer open questions, such as: what is the best way to estimate orientation? We conclude that estimating orientation based on soft classification gives better results than direct regression and, furthermore, provides the means to model uncertainty. This information is useful not only for making decisions but also for filtering the pose when a temporal sequence is provided. A promising direction is to address tracking using Recurrent Neural Networks and video sequences generated using URSO. As future work, we also plan to extend URSO to other tasks, namely instance segmentation and SLAM, which is appropriate for targets with unknown geometry.
Nevertheless, the architecture proposed in this work is not scalable in terms of image and orientation resolution. Future work should consider how to replace the dense connections without sacrificing performance, e.g., by pruning the last-layer connections. Additionally, the results reported in this work were obtained using a dedicated network for each dataset. Sharing the same backbone across datasets may be beneficial in terms of efficiency and performance.
This work is supported by grant EP/R026092 (FAIR-SPACE Hub) through UKRI under the Industry Strategic Challenge Fund (ISCF) for Robotics and AI Hubs in Extreme and Hazardous Environments. The authors are also grateful for the feedback and discussions with Peter Blacker, Angadh Nanjangud and Zhou Hao.
-  S. Nanjangud, P. Blacker, S. Bandyopadhyay, and Y. Gao, “Robotics and ai-enabled on-orbit operations with future generation of small satellites,” Proceedings of the IEEE, 2018.
-  B. Taylor, G. Aglietti, S. Fellowes, S. Ainley, T. Salmon, I. Retat, C. Burgess, A. Hall, T. Chabot, K. Kanan, et al., “Remove debris mission, from concept to orbit,” in 32nd Annual AIAA/USU Conference on Small Satellites, 2018.
-  R. Opromolla, G. Fasano, G. Rufino, and M. Grassi, “A review of cooperative and uncooperative spacecraft pose determination techniques for close-proximity operations,” Progress in Aerospace Sciences, vol. 93, pp. 53–72, 2017.
-  J. L. Forshaw, G. S. Aglietti, N. Navarathinam, H. Kadhem, T. Salmon, A. Pisseloup, E. Joffre, T. Chabot, I. Retat, R. Axthelm, et al., “Removedebris: An in-orbit active debris removal demonstration mission,” Acta Astronautica, vol. 127, pp. 448–463, 2016.
-  “European space agency, kelvins - esa’s advanced concepts competition website.” https://kelvins.esa.int/.
-  B. Naasz, J. V. Eepoel, S. Queen, C. M. Southward, and J. Hannah, “Flight results from the hst sm4 relative navigation sensor system,” 2010.
-  J. M. Kelsey, J. Byrne, M. Cosgrove, S. Seereeram, and R. K. Mehra, “Vision-based relative pose estimation for autonomous rendezvous and docking,” in IEEE Aerospace Conference, p. 20, 2006.
-  C. Liu and W. Hu, “Relative pose estimation for cylinder-shaped spacecrafts using single image,” IEEE Transactions on Aerospace and Electronic Systems, vol. 50, no. 4, 2014.
-  A. Petit, E. Marchand, and K. Kanani, “A robust model-based tracker combining geometrical and color edge information,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3719–3724, 2013.
-  A. Petit, E. Marchand, R. Sekkal, and K. Kanani, “3d object pose detection using foreground/background segmentation,” in International Conference on Robotics and Automation (ICRA), pp. 1858–1865, IEEE, 2015.
-  T. Drummond and R. Cipolla, “Real-time visual tracking of complex structures,” IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2002.
-  A. Kendall, M. Grimes, and R. Cipolla, “Posenet: A convolutional network for real-time 6-dof camera relocalization,” in IEEE International Conference on Computer Vision (ICCV), pp. 2938–2946, 2015.
-  Y. Xiang, T. Schmidt, V. Narayanan, and D. Fox, “Posecnn: A convolutional neural network for 6d object pose estimation in cluttered scenes,” 2018.
-  W. Kehl, F. Manhardt, F. Tombari, S. Ilic, and N. Navab, “Ssd-6d: Making rgb-based 3d detection and 6d pose estimation great again,” in IEEE International Conference on Computer Vision, pp. 1521–1529, 2017.
-  T.-T. Do, M. Cai, T. Pham, and I. Reid, “Deep-6dpose: recovering 6d object pose from a single rgb image,” arXiv preprint arXiv:1802.10367, 2018.
-  Y. Hu, J. Hugonot, P. Fua, and M. Salzmann, “Segmentation-driven 6d object pose estimation,” in CVPR, 2019.
-  B. Tekin, S. N. Sinha, and P. Fua, “Real-time seamless single shot 6d object pose prediction,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 292–301, 2018.
-  M. Rad and V. Lepetit, “Bb8: A scalable, accurate, robust to partial occlusion method for predicting the 3d poses of challenging objects without using depth,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 3828–3836, 2017.
-  A. Kendall and R. Cipolla, “Modelling uncertainty in deep learning for camera relocalization,” in International Conference on Robotics and Automation (ICRA), pp. 4762–4769, IEEE, 2016.
-  S. Mahendran, H. Ali, and R. Vidal, “3d pose regression using convolutional neural networks,” in IEEE International Conference on Computer Vision (ICCV), pp. 2174–2182, 2017.
-  H. Su, C. R. Qi, Y. Li, and L. J. Guibas, “Render for cnn: Viewpoint estimation in images using cnns trained with rendered 3d model views,” in IEEE International Conference on Computer Vision (ICCV), 2015.
-  K. Hara, R. Vemulapalli, and R. Chellappa, “Designing deep convolutional neural networks for continuous object orientation estimation,” arXiv preprint arXiv:1702.01499, 2017.
-  S. Wang, R. Clark, H. Wen, and N. Trigoni, “Deepvo: Towards end-to-end visual odometry with deep recurrent convolutional neural networks,” in International Conference on Robotics and Automation (ICRA), pp. 2043–2050, IEEE, 2017.
-  H. Zhou, B. Ummenhofer, and T. Brox, “Deeptam: Deep tracking and mapping,” in The European Conference on Computer Vision (ECCV), September 2018.
-  T. Hodan, F. Michel, E. Brachmann, W. Kehl, A. Glent Buch, D. Kraft, B. Drost, J. Vidal, S. Ihrke, X. Zabulis, et al., “Bop: benchmark for 6d object pose estimation,” in European Conference on Computer Vision (ECCV), pp. 19–34, 2018.
-  S. Hinterstoisser, V. Lepetit, N. Rajkumar, and K. Konolige, “Going further with point pair features,” in European Conference on Computer Vision (ECCV), pp. 834–848, Springer, 2016.
-  S. Sharma, C. Beierle, and S. D’Amico, “Pose estimation for non-cooperative spacecraft rendezvous using convolutional neural networks,” in IEEE Aerospace Conference, pp. 1–12, 2018.
-  S. Sharma and S. D’Amico, “Pose estimation for non-cooperative rendezvous using neural networks,” in AAS/AIAA Astrodynamics Specialist Conference, 2019.
-  A. Canziani, A. Paszke, and E. Culurciello, “An analysis of deep neural network models for practical applications,” arXiv preprint arXiv:1605.07678, 2016.
-  K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask r-cnn,” in IEEE International Conference on Computer Vision (ICCV), pp. 2961–2969, 2017.
-  K. S. Arun, T. S. Huang, and S. D. Blostein, “Least-squares fitting of two 3-d point sets,” IEEE Transactions on pattern analysis and machine intelligence, no. 5, pp. 698–700, 1987.
-  A. Kendall and R. Cipolla, “Geometric loss functions for camera pose regression with deep learning,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5974–5983, 2017.
-  L. Liu, L. Wang, and X. Liu, “In defense of soft-assignment coding,” in International Conference on Computer Vision (ICCV), 2011.
-  F. L. Markley, Y. Cheng, J. L. Crassidis, and Y. Oshman, “Averaging quaternions,” Journal of Guidance, Control, and Dynamics, vol. 30, no. 4, pp. 1193–1197, 2007.
-  “Nasa - visible earth.” https://visibleearth.nasa.gov/.
-  “Turbosquid.” https://www.turbosquid.com/.
-  W. Qiu, F. Zhong, Y. Zhang, S. Qiao, Z. Xiao, T. S. Kim, Y. Wang, and A. Yuille, “Unrealcv: Virtual worlds for computer vision,” ACM Multimedia Open Source Software Competition, 2017.
-  S. D’Amico, J.-S. Ardaens, and R. Larsson, “Spaceborne autonomous formation-flying experiment on the prisma mission,” Journal of Guidance, Control, and Dynamics, vol. 35, no. 3, pp. 834–850, 2012.