What Face and Body Shapes Can Tell About Height



Recovering a person’s height from a single image is important for virtual garment fitting, autonomous driving, and surveillance. However, it is also very challenging due to the absence of absolute scale information. We tackle the rarely addressed case in which camera parameters and scene geometry are unknown. To nevertheless resolve the inherent scale ambiguity, we infer height from statistics that are intrinsic to human anatomy and can be estimated directly from images, such as articulated pose, bone-length proportions, and facial features. Our contribution is twofold. First, we experiment with different machine learning models to capture the relation between image content and human height. Second, we show that performance is predominantly limited by dataset size and create a new dataset that is three orders of magnitude larger, by mining explicit height labels and propagating them to additional images through face recognition and assignment consistency. Our evaluation shows that monocular height estimation is possible with a MAE of 5.56 cm.


Semih Günel (semih.gunel@epfl.ch), Helge Rhodin (helge.rhodin@epfl.ch), Pascal Fua (pascal.fua@epfl.ch)
CVLab, EPFL, Lausanne, Switzerland

1 Introduction

Estimating people’s height from a single image is needed in areas such as subject identification for surveillance purposes, pedestrian distance estimation for autonomous driving, and automated garment fitting in online stores. However, since people’s apparent height is affected by camera distance and focal length, assessing someone’s real height only from the image is difficult.

Existing algorithms can only operate under very specific conditions. For example, comparisons to the average height have been used in [Dey et al.(2014)Dey, Nangia, Ross, Keith, and Liu] to infer people’s sizes from group pictures. Similarly, height inference has been demonstrated in images acquired with calibrated cameras in which the location of the ground plane is given [Guan(2009), Zhou et al.(2016)Zhou, Jiang, Zhang, Zhang, and Wang, J. Vester(), Li et al.(2011)Li, Nguyen, Ma, Jin, Do, and Kim] or in which objects of known height are visible [J. Vester(), Ljungberg and Sönnerstam()]. The method of [BenAbdelkader and Yacoob(2008)] is the only one we know of that can operate on single uncalibrated RGB images without prior knowledge. However, it relies on manually supplied keypoints and its reported results on real images lack precision.

In this paper, we propose a more generic and automated approach that relies on extensive evidence from the biometrics literature that height is correlated to the relative sizes of body parts [Adjeroh et al.(2010)Adjeroh, Cao, Piccirilli, and Ross, Zaslan et al.()Zaslan, Yaar, Can, Nci Zaslan, Tugcu, and Koç, Shiang(1999), Kato and Higashiyama(1998), Re et al.(2013)Re, Debruine, Jones, and Perrett, Mather(2010), Burton and Rule(2013), Wilson et al.(2010)Wilson, Herrmann, and Jantz, Albanese et al.(2016)Albanese, Tuck, Gomes, and Cardoso, Duyar and Pelin(2003)], such as the ratio of the tibia length to the whole body or the head to shoulders ratio, which are scale invariant. To this end, we train Deep Nets to capture the correlation between the relative size of body parts without having to explicitly measure them. We demonstrate that they can be trained end-to-end and yield estimates whose mean absolute error is 5.56cm, which is better than the state-of-the-art [BenAbdelkader and Yacoob(2008)].

Our contribution is empirical in nature, with fundamental implications for the theoretical design of height and pose estimation approaches. First, we have developed a practical approach to mining a large training dataset via label propagation. Second, we show that the specific machine learning algorithm being used matters, but only when using training datasets that are several orders of magnitude larger than the ones with only a handful of subjects that have been used so far [Ionescu et al.(2014)Ionescu, Papava, Olaru, and Sminchisescu, Sigal et al.(2010)Sigal, Balan, and Black, Mehta et al.(2017a)Mehta, Rhodin, Casas, Fua, Sotnychenko, Xu, and Theobalt]. Finally, our experiments show how important it is to account for small-scale facial details in addition to large-scale full-body information, and that these are best captured by networks with scale-specific streams. These findings suggest that future work should focus on both small- and large-scale features and must use training datasets that are several orders of magnitude larger than the ones used currently.

2 Related Work

There are several algorithms that can infer age [Rothe et al.()Rothe, Timofte, and Gool, Malli et al.(2016)Malli, Aygun, and Ekenel, Dong et al.(2016)Dong, Liu, and Lian, Wang et al.(2015)Wang, Guo, and Kambhamettu] or emotional state [Dagar et al.(2016)Dagar, Hudait, Tripathy, and Das, Wegrzyn et al.(2017)Wegrzyn, Vogt, Kireclioglu, Schneider, and Kissler] from single images with high reliability, often exceeding that of humans, in part because these are not affected noticeably by scale ambiguities. By contrast, there are far fewer approaches to estimating human size and we review them briefly here.

Geometric height estimation.

The height of standing people can be estimated geometrically from a single image under some fairly mild assumptions. This can be done by finding head position and foot contact through triangulation when the camera height and orientation in relation to the ground plane is known [Guan(2009), Zhou et al.(2016)Zhou, Jiang, Zhang, Zhang, and Wang, Li et al.(2011)Li, Nguyen, Ma, Jin, Do, and Kim], computing the vanishing point of the ground plane and the height of a reference object in the scene [J. Vester(), Hartley and Zisserman(2000)], or accounting for the height of multiple nearby reference objects [J. Vester(), Ljungberg and Sönnerstam()]. However, the necessary knowledge about camera pose, ground plane, and feet contact points is often unavailable.

Height from camera geometry.

Without external scale information, object size is ambiguous according to the basic pinhole camera model. In practice, lenses have a limited depth of field, which shape-from-defocus techniques exploit [Mather(1996), Shi et al.(2015)Shi, Chen, Wang, Yeung, Wong, and Woo] to estimate distance. It can be used to guess depth orderings in a single image. However, a focal sweep across multiple images or a specialized camera [Georgiev et al.(2013)Georgiev, , Yu, Lumsdaine, and Goma] is required for metric scale reconstruction.

Height from image features.

In [Dey et al.(2014)Dey, Nangia, Ross, Keith, and Liu], face position and size are used to measure relative heights in pixels, first in group pictures and then in image collections featuring groups. Absolute height is estimated from the network of relative heights by enforcing consistency with the average human height, which is effective but only for group photos. Closest to our approach is the data-driven one of [BenAbdelkader and Yacoob(2008)], which uses a linear regressor to predict height from keypoint locations in the input image. The results of an anthropometric survey [Gordon et al.(1989)Gordon, Churchill, Clauser, Bradtmiller, and McConville] are used to train the regressor. However, even though the keypoints are supplied manually, the results on real images barely exceed what can be done by predicting an average height for all subjects. By contrast, our DeepNet regressor is non-linear, can learn a much more complex mapping that accounts for the uncertainty of image-feature extraction, does not require manual annotation of keypoints, and yields better results. Its network architecture is inspired by deep networks used for 3D human pose prediction [Pavlakos et al.(2017a)Pavlakos, Zhou, Derpanis, Konstantinos, and Daniilidis, Tome et al.(2017)Tome, Russell, and Agapito, Popa et al.(2017)Popa, Zanfir, and Sminchisescu, Martinez et al.(2017)Martinez, Hossain, Romero, and Little, Mehta et al.(2017b)Mehta, Sridhar, Sotnychenko, Rhodin, Shafiei, Seidel, Xu, Casas, and Theobalt, Rogez et al.(2017)Rogez, Weinzaepfel, and Schmid, Pavlakos et al.(2017b)Pavlakos, Zhou, Konstantinos, and Kostas, Tomè et al.(2017)Tomè, Russell, and Agapito, Tekin et al.(2017)Tekin, Márquez-neila, Salzmann, and Fua]. However, we will show that training on the existing 3D pose datasets with a handful of subjects is insufficient, which was our incentive for creating a larger one.

Height from body measurements.

Medical studies suggest that the height of an individual can be approximated given ratios of limb proportions [Adjeroh et al.(2010)Adjeroh, Cao, Piccirilli, and Ross], absolute tibia length [Duyar and Pelin(2003)], foot length [Zaslan et al.()Zaslan, Yaar, Can, Nci Zaslan, Tugcu, and Koç], and the ratio of head to shoulders [Shiang(1999), Kato and Higashiyama(1998)]. Human perception of height also seems to be influenced by the head-to-shoulders ratio, which suggests a real link between this ratio and actual height [Re et al.(2013)Re, Debruine, Jones, and Perrett, Mather(2010), Burton and Rule(2013)]. There is also a body of anthropological research on inferring the living height of an individual from the length of several bones in the skeleton, which indicates that height can be approximated given the size of some body parts [Wilson et al.(2010)Wilson, Herrmann, and Jantz, Albanese et al.(2016)Albanese, Tuck, Gomes, and Cardoso, Duyar and Pelin(2003)].

While these studies indicate that height estimation should be possible from facial and full-body measurements, there is no easy way to obtain them from single uncalibrated images, and it is not known how naturally occurring feature-extraction errors influence accuracy. In particular, the often-mentioned absolute length measurements cannot be inferred directly from 2D images.
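By contrast, ratios of 2D distances are scale invariant and thus, in principle, recoverable from uncalibrated images. A minimal sketch of such anthropometric ratios, assuming a hypothetical dictionary of joint coordinates (the joint names are our own convention, not a specific pose estimator's API):

```python
import numpy as np

def anthropometric_ratios(kp):
    """Scale-invariant ratios from 2D keypoints in pixel coordinates.

    `kp` is a hypothetical dict mapping joint names to (x, y) pairs.
    Ratios cancel the unknown pixel-to-metric scale factor.
    """
    def dist(a, b):
        return np.linalg.norm(np.asarray(kp[a], float) - np.asarray(kp[b], float))

    body = dist("head_top", "ankle")  # apparent body length in pixels
    return {
        "tibia_to_body": dist("knee", "ankle") / body,
        "head_to_shoulders": dist("head_top", "neck")
                             / dist("left_shoulder", "right_shoulder"),
        "torso_to_body": dist("neck", "hip") / body,
    }
```

In a frontal view such ratios approximate their anatomical counterparts, but under foreshortening they degrade, which is one motivation for learning the mapping from images directly instead of measuring explicitly.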

3 Method

Our goal is to estimate human height, h, from a single RGB image, I, without prior knowledge of camera geometry, viewpoint position, or ground plane location. This setting rules out any direct measurement and requires statistical analysis of body proportions and appearance from the images only. We therefore follow a data-driven approach and infer the relationship between image content and human height through machine learning.

To make the method independent of scene-specific content, we first localize people in the image and then learn a mapping from image crops that tightly contain the target subject. To this end, we first introduce a diverse dataset of cropped image-height pairs, D = {(I_i, h_i)}, with N examples (Sec. 3.1). Then, we explore different image features and neural network architectures to infer the parameters of a regression function that robustly predicts height given a new input (Sec. 3.2).

3.1 Dataset Mining

Existing 3D pose datasets are limited to a handful of subjects [Ionescu et al.(2014)Ionescu, Papava, Olaru, and Sminchisescu, Sigal et al.(2010)Sigal, Balan, and Black, Mehta et al.(2017a)Mehta, Rhodin, Casas, Fua, Sotnychenko, Xu, and Theobalt] (Human3.6Million, HumanEva and MPII-INF-3DHP), and datasets built from web content and comprising anonymous individuals [Andriluka et al.(2014)Andriluka, Pishchulin, Gehler, and Schiele, Charles et al.(2016)Charles, Pfister, Magee, Hogg, and Zisserman, Lin et al.(2014)Lin, Maire, Belongie, Hays, Perona, Ramanan, Dollár, and Zitnick] (MPII-2D, BBC-Pose, COCO) do not include height information. We therefore built our own. We started from a medium-sized one containing people of known height, which we then enlarged using face-based re-identification and pruned by enforcing label consistency and filtering on 2D pose estimates. Fig. 1 depicts the result.

Figure 1: Examples from IMDB-100K. Profile images have been matched to additional images; height labels from portrait images are thereby propagated to all assigned images.


We used the IMDB website as our starting point. As of February 2018, it covers 8.7 million show-business personalities (https://www.imdb.com/pressroom/stats/). To find heights of people and corresponding images, we crawled the 100,000 most popular actors (http://www.imdb.com/search/name?gender=male,female). We found 12,104 individuals with both height information and a profile image involving a single face.


IMDB also has more than a million images taken at award ceremonies and stills from movies, including full-body images of our 12,104 individuals. Although there are associated labels specifying the actors present in the image, these labels do not specify the location of each person, which makes the association of height labels to a single person in an image potentially ambiguous, especially if several people are present.

Formally, let I be an image that should be labeled and {s_1, ..., s_L} be the subject labels given by IMDB. We run a face detection algorithm [King(2009)], which returns a set of detected individuals and, for each detection d_j, the head location in terms of a bounding box and a feature vector f_j that describes the appearance compactly.

When there is only one person in the image, we can directly attribute the associated height information to the detected subject. This has enabled us to create a first annotated dataset of 23,024 examples, which we will refer to as IMDB-23K.

Figure 2: Identity matching. Samples from the processed IMDB-100K dataset, with an overlay of the assigned subjects’ 2D pose, head detection, identity, and height annotation. To favor reliable assignments over false ones, some persons remain unassigned.

The same strategy was used to create the IMDB-WIKI dataset to learn age from facial crops [Rothe et al.()Rothe, Timofte, and Gool]. However, we also need the rest of the body and want to create a richer database by also using images with several people and multiple detections. To associate labels and detections in such cases, we compute from the profile image of each subject s_i a facial descriptor g_i and store its Euclidean distance to the descriptors f_j of all detections in image I. This yields a distance matrix M, with entries m_ij, between all the g_i and f_j descriptors. To match a detection d_j to a specific subject s_i, we require that m_ij is smaller than all other distances in row i and column j, that is, d_j is the closest match to s_i among all detections and, similarly, s_i is the closest match to d_j among all subjects. In practice, we apply an additional ratio test to ensure the assignment is reliable: we accept the best matching pair only if the quotient of the smallest and second-smallest distance in the row falls below a fixed threshold, i.e., the best match must be significantly better than the second-best match to be accepted. This produces a much larger set of 274,964 image-person pairs with known height, which we will refer to as IMDB-275K. We show a few examples in Fig. 2.
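The mutual-best-match assignment with the ratio test can be sketched as follows; the 0.7 threshold and function names are illustrative assumptions, since the threshold value is not specified here:

```python
import numpy as np

def assign_labels(subject_desc, detection_desc, ratio=0.7):
    """Mutual-best-match label assignment with a ratio test (sketch).

    subject_desc: (S, D) facial descriptors g_i from profile images.
    detection_desc: (K, D) descriptors f_j of faces detected in one image.
    Returns {subject_index: detection_index} for reliable pairs only.
    """
    # distance matrix M with entries m_ij = ||g_i - f_j||
    dist = np.linalg.norm(
        subject_desc[:, None, :] - detection_desc[None, :, :], axis=-1)
    matches = {}
    for i in range(dist.shape[0]):
        j = int(np.argmin(dist[i]))
        # mutual best match: s_i must also be the closest subject for d_j
        if int(np.argmin(dist[:, j])) != i:
            continue
        # ratio test: best match must clearly beat the second-best
        row = np.sort(dist[i])
        if row.size > 1 and row[0] >= ratio * row[1]:
            continue
        matches[i] = j
    return matches
```

Leaving ambiguous detections unmatched trades recall for precision, which matters more here because wrong height labels would directly corrupt the training signal.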

To estimate the accuracy of our assignments in IMDB-275K, we randomly selected 120 images that include multiple faces, where assigning identities to faces is non-trivial and possibly erroneous. Out of the 331 IMDB labels in these 120 images, we assigned 237 labels to faces, of which only 5 assignments were wrong. We repeated the same experiment for IMDB-23K, where assignment is much easier: we again selected 120 random images and checked the accuracy of the assignments, observing only a single mismatch. Overall, this corresponds to an estimated label precision of 98.0% and recall of 70.1%.

Filtering and preprocessing.

As discussed in the Related Work section, previous studies suggest that full-body pose and bone-length relations contain scale information. Therefore, we run a multi-person 2D pose estimation algorithm [Cao et al.(2017)Cao, Simon, Wei, and Sheikh] on each dataset image and assign the detected joints to the subject whose head location, as predicted by the face detector [King(2009)], is closest to the one estimated by the 2D pose extraction algorithm. The 2D joints are then used to compute a pose crop I_body that tightly encloses the body and head. The face is similarly cropped to I_face, as shown in the left side of Fig. 3.
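These two preprocessing steps can be sketched as follows; the margin value and the helper names are our own illustrative choices:

```python
import numpy as np

def nearest_pose(face_center, pose_heads):
    """Index of the detected 2D skeleton whose head is closest to the
    center of a face bounding box (the assignment rule described above)."""
    d = np.linalg.norm(np.asarray(pose_heads, float)
                       - np.asarray(face_center, float), axis=1)
    return int(np.argmin(d))

def tight_crop(joints, margin=0.1):
    """Axis-aligned box tightly enclosing the 2D joints, with a small
    relative margin (the 0.1 value is an assumption, not from the paper).

    joints: (J, 2) pixel coordinates; NaN marks undetected joints.
    Returns (x0, y0, x1, y1).
    """
    pts = joints[~np.isnan(joints).any(axis=1)]
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    pad = margin * max(x1 - x0, y1 - y0)
    return (x0 - pad, y0 - pad, x1 + pad, y1 + pad)
```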

Finally, we automatically exclude from IMDB-275K images missing upper-body joints or whose crop is less than 32 pixels tall. This leaves us with 101,664 examples, which we will refer to as the IMDB-100K dataset. We also applied this process to IMDB-23K. In both cases, we store for each person the annotated height h, a face crop I_face, a facial feature vector f, a pose crop I_body, a set of 2D joint locations, and the gender if available.

Splitting the dataset.

We split IMDB-100K into three sets of roughly 80k, 15k, and 5k images for training, testing, and validation, respectively.

3.2 Height Regression

Since there is very little prior work on estimating human height directly from image features, it is unclear which features are the most effective. We therefore tested a wide range of them. To the face and body crops, I_face and I_body, discussed in Section 3.1, which we padded to a fixed square resolution, we added the corresponding 2D body poses, in the form of 2D keypoint locations centered around their mean and whitened, along with 4096-dimensional facial features computed from the last hidden layer of the VGG-16-based face recognition network of [Wu et al.()Wu, He, Sun, and Tan].
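The keypoint preprocessing (centering around the mean, then whitening) might look as follows; dividing by the per-coordinate standard deviation is our assumption about the whitening scheme:

```python
import numpy as np

def normalize_pose(joints):
    """Center 2D keypoints around their mean and whiten them (sketch).

    joints: (J, 2) pixel coordinates. The per-coordinate standard
    deviation is guarded against zero to avoid division errors.
    """
    centered = joints - joints.mean(axis=0)
    std = centered.std(axis=0)
    return centered / np.where(std > 0, std, 1.0)
```

This removes translation and overall pixel scale, so the regressor sees only the relative configuration of the joints.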

Figure 3: Deep two-stream architecture. Humans are automatically detected and cropped in the preprocessing stage. We experiment with DeepNet, a two-scale deep convolutional network that is trained end-to-end, with ShallowNet, a simple network that operates on generic image features, and with Linear, a simple linear regression.

Given all these features, we tested the three different approaches to regression depicted by Fig. 3:

  • Linear. Linear regression from the pre-computed 2D pose and facial features vectors, as in [BenAbdelkader and Yacoob(2008)].

  • ShallowNet. Regression using a 4-layer fully connected network as used in [Martinez et al.(2017)Martinez, Hossain, Romero, and Little]. ShallowNet operates on the same features as Linear.

  • DeepNet. Regression using a deeper and more complex network to combine fine-grained facial features with large-scale information about overall body pose and shape. It uses two separate streams to compute face and full-body features directly from the face and body crops, respectively, and two fully connected layers to fuse the results, as depicted in Fig. 3. In contrast to ShallowNet, we train this network end-to-end and thereby optimize the facial and full-body feature extraction networks for the task of human height estimation using an MSE loss. To allow for a fair comparison, we use the same VGG architecture in the face stream [Wu et al.()Wu, He, Sun, and Tan]. For the full-body stream, we use a ResNet [He et al.(2016)He, Zhang, Ren, and Sun].
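A minimal PyTorch sketch of the two-stream design and one end-to-end training step with the MSE loss; the small convolutional stand-ins replace the VGG-16 and ResNet backbones used here, and all layer sizes are illustrative:

```python
import torch
import torch.nn as nn

def conv_stream(out_dim):
    """Small stand-in for the real backbones (VGG-16 face stream,
    ResNet body stream); a full model would use those instead."""
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, out_dim))

class TwoStreamHeightNet(nn.Module):
    """Face and body streams fused by fully connected layers."""
    def __init__(self, feat=64):
        super().__init__()
        self.face_stream = conv_stream(feat)   # fine-grained facial cues
        self.body_stream = conv_stream(feat)   # large-scale pose/shape cues
        self.fuse = nn.Sequential(
            nn.Linear(2 * feat, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, face_crop, body_crop):
        f = self.face_stream(face_crop)
        b = self.body_stream(body_crop)
        return self.fuse(torch.cat([f, b], dim=1)).squeeze(1)  # height in cm

# one end-to-end training step with the MSE loss
net = TwoStreamHeightNet()
opt = torch.optim.Adam(net.parameters())
face = torch.randn(4, 3, 128, 128)
body = torch.randn(4, 3, 128, 128)
heights = torch.tensor([170.0, 182.0, 165.0, 176.0])
loss = nn.MSELoss()(net(face, body), heights)
loss.backward()
opt.step()
```

Because the loss gradient flows through both streams, each backbone is tuned for height estimation rather than for its original face-recognition or classification task.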

4 Evaluation

We now quantify the improvement brought about by our estimation and try to tease out the influence of its individual components, dataset mining and network design. We also show some example results in Fig. 4 for the most popular actors from the test split of IMDB-100K.


We report height estimation accuracy in terms of the mean absolute error (MAE) compared to the annotated height in cm. We also supply cumulative error histograms.
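Both metrics are straightforward to compute; a sketch, with illustrative threshold values for the cumulative histogram:

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error in cm, the main metric used here."""
    return float(np.mean(np.abs(np.asarray(pred, float) - np.asarray(gt, float))))

def cumulative_error(pred, gt, thresholds=(1, 2, 5, 10)):
    """Cumulative error histogram: fraction of subjects whose absolute
    error falls below each threshold in cm (threshold values are ours)."""
    err = np.abs(np.asarray(pred, float) - np.asarray(gt, float))
    return {t: float(np.mean(err <= t)) for t in thresholds}
```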

Figure 4: Qualitative evaluation. Results on the IMDB-100K test set, shown as prediction/ground-truth in centimeters.

Independent test set. To demonstrate that our training dataset is generic and that our models generalize well, we created Lab-test, an in-house dataset containing photos of various subjects whose height is known precisely. Since it was acquired completely independently from IMDB, we can be sure that our results are not contaminated by overfitting to specific poses, appearance, illumination, or angle consistency. Lab-test depicts 14 different individuals with 10 photos each. Each photo contains a full-body shot in a different setting, sometimes with small occlusions to reflect the complexities of the real world. The subjects are walking in different directions, standing, or sitting. The individuals span diverse ethnicities from several European and Asian countries, with heights ranging from 1.57 m to 1.93 m.

Baselines. We compare against the following baselines in order of increasing sophistication:

  • ConstantMean. The simplest thing we can do, which is to directly predict the average height of IMDB-100K, which is 170.1 centimeters.

  • GenderMean. Since men are taller than women on average, gender is a predictor of height. We use the ground-truth annotation as an oracle and the gender-specific mean height as the prediction.

  • GenderPred. Instead of using an oracle, we train a network whose architecture is similar to DeepNet to predict gender instead of height and again use the gender-specific mean height as the prediction.

  • PoseNet. We re-implemented the method of [Mehta et al.(2017a)Mehta, Rhodin, Casas, Fua, Sotnychenko, Xu, and Theobalt] that predicts 3D human pose in absolute metric coordinates after training on the Human3.6M dataset [Ionescu et al.(2014)Ionescu, Papava, Olaru, and Sminchisescu]. Height information is extracted from the predicted bone lengths from head to ankle. To account for the distance from the ankle to the ground, we fit a constant offset between the predicted height and the ground-truth height on IMDB-100K.
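Two of these baselines reduce to simple statistics of the training set; a sketch (function names are ours):

```python
import numpy as np

def gender_mean_baseline(train_heights, train_genders):
    """GenderMean: predict the gender-specific mean height
    of the training set."""
    heights = np.asarray(train_heights, float)
    genders = np.asarray(train_genders)
    means = {g: float(heights[genders == g].mean()) for g in set(train_genders)}
    return lambda gender: means[gender]

def fit_ankle_offset(pred_heights, gt_heights):
    """Constant offset between the head-to-ankle length predicted by the
    PoseNet baseline and the ground-truth height, fitted on training data."""
    return float(np.mean(np.asarray(gt_heights, float)
                         - np.asarray(pred_heights, float)))
```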

4.1 Comparative Results

Method                      IMDB-100K                  Lab-test
                            all     women   men        all
ConstantMean                8.25    7.46    9.22       11.00
GenderPred                  6.61    6.28    7.12       9.26
PoseNet [Mehta et al.(2017a)Mehta, Rhodin, Casas, Fua, Sotnychenko, Xu, and Theobalt]
                            -       -       -          10.65
DeepNet (ours)              6.14    5.88    6.40       9.13
GenderMean                  5.91    5.63    6.23       8.66
DeepNet (gender-specific)   5.56    5.23    6.03       8.53

                            Regression type
Input features              Linear          ShallowNet      DeepNet
Body crop only              7.56 / 11.10    7.10 / 10.40    6.40 / 9.43
Face crop only              6.49 / 10.25    6.31 / 9.99     6.25 / 8.87
Body and Face               6.40 / 10.20    6.29 / 9.92     6.06 / 9.13

Table 1: Mean Absolute Error (MAE) in cm on IMDB-100K and Lab-test. (a, top) Comparison against our baselines. (b, bottom) Ablation study; accuracies are given in IMDB-100K / Lab-test format.

We report our mean accuracies on IMDB-100K and Lab-test, along with those of the baselines, at the top of Table 1(a). DeepNet, which is our complete approach, outperforms them on both, with GenderPred a close second. This confirms that gender is indeed a strong height predictor. To verify this, we retrained DeepNet for men and women separately and compare its accuracy to that of GenderMean at the bottom of the table. As can be seen, our full approach improves upon this as well but, somewhat unexpectedly, more for women than for men.

PoseNet does not do particularly well, presumably because it has been trained on many images but all from a small number of subjects. It has therefore not learned the vast variety of possible body shapes.

In Table 1(b), we report the results of an ablation study in which we ran the three versions of our algorithm—Linear, ShallowNet, and DeepNet introduced in Section 3.2—on the full dataset, on the faces only, or on the body only. In all cases, DeepNet does better than the others, which indicates that it also outperforms the state-of-the-art algorithm [BenAbdelkader and Yacoob(2008)], which Linear emulates.

On IMDB-100K, using both body and faces helps but, surprisingly, not on Lab-test, where using the faces only works best. We suspect that the poses in Lab-test are more varied than in IMDB-100K. Furthermore, there is also a wider spread of heights in Lab-test, and other biases due to its small size, which might contribute to this unexpected behavior. This is something we will have to investigate further. Aside from this, the conclusions drawn from experiments on IMDB-100K are all confirmed on Lab-test.

Figure 5: Accuracy as a function of the training dataset size. We plot separate curves for men and women.

In Fig. 5, we plot the accuracy of our model as a function of the size of the training set. It clearly takes more than 10,000 to 20,000 images to outperform GenderMean. Interestingly, it seems to take more images for men than for women, possibly due to the larger variance in men’s heights. This indicates that the results we report here for men might not yet be optimal and would benefit from an even larger training set. This finding could also explain the lower accuracy on Lab-test, which is dominated by men, compared to IMDB-100K.

5 Discussion and Limitations

Although we collected a large set of images and actors with height annotations, our dataset comes with caveats: height information given on IMDB might be imprecise or even speculative, and there is no practical way to assess the quality of the annotations. Furthermore, human height can change with age, and our dataset only provides a single number, even if someone’s images span multiple decades. We therefore assume the provided height information is descriptive rather than precise. One future direction of research could be to eliminate such inaccuracies, possibly by making annotations consistent across group pictures [Dey et al.(2014)Dey, Nangia, Ross, Keith, and Liu]. Furthermore, if some annotations can be identified as unreliable, this could be modeled by incorporating a confidence value during training and prediction.

6 Conclusion

With 274,964 images and 12,104 actors, we introduce the largest dataset for height estimation to date. The provided label association could be used not only for height estimation but also to explore other properties of human appearance and shape in the future. We experimented with different network architectures that, when trained on IMDB-100K, improve significantly on existing height estimation solutions.

Our findings have several implications for future work. Since height prediction from a single image remains inaccurate, methods for 3D human pose prediction should not be evaluated in metric space, as is often done, but after scale normalization, with the inevitable inaccuracies in height estimation evaluated separately. Furthermore, if absolute height is desired, a large dataset must be used to cover the large variations in human shape, pose, and appearance. Finally, it is important to combine both facial and full-body information for height regression.


  • [Adjeroh et al.(2010)Adjeroh, Cao, Piccirilli, and Ross] D. Adjeroh, D. Cao, M. Piccirilli, and A. Ross. Predictability and correlation in human metrology. In International Workshop on Information Forensics and Security, pages 1–6. IEEE, 2010. ISBN 978-1-4244-9078-3. doi: 10.1109/WIFS.2010.5711470.
  • [Albanese et al.(2016)Albanese, Tuck, Gomes, and Cardoso] J. Albanese, A. Tuck, J. Gomes, and H.F.V. Cardoso. An alternative approach for estimating stature from long bones that is not population- or group-specific. Forensic Science International, 259:59–68, 2016. ISSN 03790738. doi: 10.1016/j.forsciint.2015.12.011.
  • [Andriluka et al.(2014)Andriluka, Pishchulin, Gehler, and Schiele] M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele. 2D Human Pose Estimation: New Benchmark and State of the Art Analysis. In Conference on Computer Vision and Pattern Recognition, 2014.
  • [BenAbdelkader and Yacoob(2008)] C. BenAbdelkader and Y. Yacoob. Statistical Body Height Estimation from a Single Image. In Automated Face and Gesture Recognition, pages 1–7, 2008.
  • [Burton and Rule(2013)] C.M. Burton and N.O. Rule. Judgments of Height from Faces are Informed by Dominance and Facial Maturity. Social Cognition, 31(6):672–685, 2013.
  • [Cao et al.(2017)Cao, Simon, Wei, and Sheikh] Z. Cao, T. Simon, S. Wei, and Y. Sheikh. Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields. In Conference on Computer Vision and Pattern Recognition, 2017.
  • [Charles et al.(2016)Charles, Pfister, Magee, Hogg, and Zisserman] J. Charles, T. Pfister, D. Magee, D. Hogg, and A. Zisserman. Personalizing human video pose estimation. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
  • [Dagar et al.(2016)Dagar, Hudait, Tripathy, and Das] D. Dagar, A. Hudait, H. K. Tripathy, and M. N. Das. Automatic emotion detection model from facial expression. In 2016 International Conference on Advanced Communication Control and Computing Technologies, pages 77–85. IEEE, 2016. ISBN 978-1-4673-9545-8. doi: 10.1109/ICACCCT.2016.7831605.
  • [Dey et al.(2014)Dey, Nangia, Ross, Keith, and Liu] R. Dey, M. Nangia, W. Ross, W. Keith, and Y. Liu. Estimating Heights from Photo Collections: A Data-Driven Approach. In ACM conference on Online social network, 2014.
  • [Dong et al.(2016)Dong, Liu, and Lian] Y. Dong, Y. Liu, and S. Lian. Automatic age estimation based on deep learning algorithm. Neurocomputing, 187:4–10, 2016. ISSN 0925-2312. doi: 10.1016/J.NEUCOM.2015.09.115.
  • [Duyar and Pelin(2003)] I. Duyar and C. Pelin. Body height estimation based on tibia length in different stature groups. American Journal of Physical Anthropology, 122(1):23–27, 2003. ISSN 0002-9483. doi: 10.1002/ajpa.10257.
  • [Georgiev et al.(2013)Georgiev, , Yu, Lumsdaine, and Goma] T. Georgiev, Z. Yu, A. Lumsdaine, and S. Goma. Lytro Camera Technology: Theory, Algorithms, Performance Analysis. In Multimedia Content and Mobile Devices, 2013.
  • [Gordon et al.(1989)Gordon, Churchill, Clauser, Bradtmiller, and McConville] C.C. Gordon, T. Churchill, C.E. Clauser, B. Bradtmiller, and J.T. McConville. Anthropometric survey of us army personnel: methods and summary statistics 1988. Technical report, Anthropology Research Project Inc Yellow Springs OH, 1989.
  • [Guan(2009)] Y. Guan. Unsupervised human height estimation from a single image. 2(6):425–430, 2009. doi: 10.4236/jbise.2009.26061.
  • [Hartley and Zisserman(2000)] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2000.
  • [He et al.(2016)He, Zhang, Ren, and Sun] K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. In Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
  • [Ionescu et al.(2014)Ionescu, Papava, Olaru, and Sminchisescu] C. Ionescu, I. Papava, V. Olaru, and C. Sminchisescu. Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014.
  • [J. Vester()] J. Vester. Estimating the Height of an Unknown Object in a 2D Image. KTH CSIC.
  • [Kato and Higashiyama(1998)] K. Kato and A. Higashiyama. Estimation of height for persons in pictures. Perception & psychophysics, 60(8):1318–28, 1998. ISSN 0031-5117.
  • [King(2009)] D.E. King. Dlib-ml: A Machine Learning Toolkit. Journal of Machine Learning Research, 10:1755–1758, 2009.
  • [Li et al.(2011)Li, Nguyen, Ma, Jin, Do, and Kim] S. Li, V.H. Nguyen, M. Ma, C. Jin, T.D. Do, and H. Kim. A simplified nonlinear regression method for human height estimation in video surveillance. EURASIP Journal on Image and Video Processing, 2011. doi: 10.1186/s13640-015-0086-1.
  • [Lin et al.(2014)Lin, Maire, Belongie, Hays, Perona, Ramanan, Dollár, and Zitnick] T-.Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C.L. Zitnick. Microsoft COCO: Common Objects in Context. In European Conference on Computer Vision, pages 740–755, 2014.
  • [Ljungberg and Sönnerstam()] J. Ljungberg and J. Sönnerstam. Estimation of human height from surveillance camera footage -a reliability study.
  • [Malli et al.(2016)Malli, Aygun, and Ekenel] R.C. Malli, M. Aygun, and H.K. Ekenel. Apparent Age Estimation Using Ensemble of Deep Learning Models. IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2016.
  • [Martinez et al.(2017)Martinez, Hossain, Romero, and Little] J. Martinez, R. Hossain, J. Romero, and J.J. Little. A Simple Yet Effective Baseline for 3D Human Pose Estimation. In International Conference on Computer Vision, 2017.
  • [Mather(1996)] G. Mather. Image blur as a pictorial depth cue. In Proc. R. Soc. Lond. B, volume 263, pages 169–172. The Royal Society, 1996.
  • [Mather(2010)] G. Mather. Head and Body Ratio as a Visual Cue for Stature in People and Sculptural Art. Perception, 39(10):1390–1395, 2010. ISSN 0301-0066. doi: 10.1068/p6737.
  • [Mehta et al.(2017a)Mehta, Rhodin, Casas, Fua, Sotnychenko, Xu, and Theobalt] D. Mehta, H. Rhodin, D. Casas, P. Fua, O. Sotnychenko, W. Xu, and C. Theobalt. Monocular 3D Human Pose Estimation in the Wild Using Improved CNN Supervision. In International Conference on 3D Vision, 2017a.
  • [Mehta et al.(2017b)Mehta, Sridhar, Sotnychenko, Rhodin, Shafiei, Seidel, Xu, Casas, and Theobalt] D. Mehta, S. Sridhar, O. Sotnychenko, H. Rhodin, M. Shafiei, H. Seidel, W. Xu, D. Casas, and C. Theobalt. Vnect: Real-Time 3D Human Pose Estimation with a Single RGB Camera. In ACM SIGGRAPH, 2017b.
  • [Pavlakos et al.(2017a)Pavlakos, Zhou, Derpanis, Konstantinos, and Daniilidis] G. Pavlakos, X. Zhou, K. Derpanis, G. Konstantinos, and K. Daniilidis. Coarse-To-Fine Volumetric Prediction for Single-Image 3D Human Pose. In Conference on Computer Vision and Pattern Recognition, 2017a.
  • [Pavlakos et al.(2017b)Pavlakos, Zhou, Derpanis, and Daniilidis] G. Pavlakos, X. Zhou, K.G. Derpanis, and K. Daniilidis. Harvesting Multiple Views for Marker-Less 3D Human Pose Annotations. In Conference on Computer Vision and Pattern Recognition, 2017b.
  • [Popa et al.(2017)Popa, Zanfir, and Sminchisescu] A-.I. Popa, M. Zanfir, and C. Sminchisescu. Deep Multitask Architecture for Integrated 2D and 3D Human Sensing. In Conference on Computer Vision and Pattern Recognition, 2017.
  • [Re et al.(2013)Re, Debruine, Jones, and Perrett] D.E. Re, L.M. Debruine, B.C. Jones, and D.I. Perrett. Facial Cues to Perceived Height Influence Leadership Choices in Simulated War and Peace Contexts. Evolutionary Psychology, 11(1):89–103, 2013.
  • [Rogez et al.(2017)Rogez, Weinzaepfel, and Schmid] G. Rogez, P. Weinzaepfel, and C. Schmid. Lcr-Net: Localization-Classification-Regression for Human Pose. In Conference on Computer Vision and Pattern Recognition, 2017.
  • [Rothe et al.()Rothe, Timofte, and Van Gool] R. Rothe, R. Timofte, and L. Van Gool. Deep expectation of real and apparent age from a single image without facial landmarks. International Journal of Computer Vision.
  • [Shi et al.(2015)Shi, Chen, Wang, Yeung, Wong, and Woo] X. Shi, Z. Chen, H. Wang, D. Yeung, W. Wong, and W. Woo. Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting. In Advances in Neural Information Processing Systems, pages 802–810, 2015.
  • [Shiang(1999)] T.Y. Shiang. A statistical approach to data analysis and 3-D geometric description of the human head and face. Proceedings of the National Science Council, Republic of China. Part B, Life sciences, 23(1):19–26, 1999. ISSN 0255-6596.
  • [Sigal et al.(2010)Sigal, Balan, and Black] L. Sigal, A. Balan, and M. J. Black. Humaneva: Synchronized Video and Motion Capture Dataset and Baseline Algorithm for Evaluation of Articulated Human Motion. International Journal of Computer Vision, 2010.
  • [Tekin et al.(2017)Tekin, Márquez-neila, Salzmann, and Fua] B. Tekin, P. Márquez-neila, M. Salzmann, and P. Fua. Learning to Fuse 2D and 3D Image Cues for Monocular Body Pose Estimation. In International Conference on Computer Vision, 2017.
  • [Tomè et al.(2017)Tomè, Russell, and Agapito] D. Tomè, C. Russell, and L. Agapito. Lifting from the Deep: Convolutional 3D Pose Estimation from a Single Image. In Conference on Computer Vision and Pattern Recognition, pages 5689–5698, 2017.
  • [Wang et al.(2015)Wang, Guo, and Kambhamettu] X. Wang, R. Guo, and C. Kambhamettu. Deeply-Learned Feature for Age Estimation. In 2015 IEEE Winter Conference on Applications of Computer Vision, pages 534–541. IEEE, 2015. ISBN 978-1-4799-6683-7. doi: 10.1109/WACV.2015.77.
  • [Wegrzyn et al.(2017)Wegrzyn, Vogt, Kireclioglu, Schneider, and Kissler] M. Wegrzyn, M. Vogt, B. Kireclioglu, J. Schneider, and J. Kissler. Mapping the emotional face. How individual face parts contribute to successful emotion recognition. PLOS ONE, 12(5):e0177239, 2017. ISSN 1932-6203. doi: 10.1371/journal.pone.0177239.
  • [Wilson et al.(2010)Wilson, Herrmann, and Jantz] R.J. Wilson, N.P. Herrmann, and L.M. Jantz. Evaluation of Stature Estimation from the Database for Forensic Anthropology. Journal of Forensic Sciences, 55(3):684–689, 2010. ISSN 00221198. doi: 10.1111/j.1556-4029.2010.01343.x.
  • [Wu et al.()Wu, He, Sun, and Tan] X. Wu, R. He, Z. Sun, and T. Tan. A Light CNN for Deep Face Representation with Noisy Labels.
  • [Özaslan et al.()Özaslan, İşcan, Özaslan, Tuğcu, and Koç] A. Özaslan, M.Y. İşcan, İ. Özaslan, H. Tuğcu, and S. Koç. Estimation of stature from body parts.
  • [Zhou et al.(2016)Zhou, Jiang, Zhang, Zhang, and Wang] X. Zhou, P. Jiang, X. Zhang, B. Zhang, and F. Wang. The Measurement of Human Height Based on Coordinate Transformation. pages 717–725. Springer, Cham, 2016. doi: 10.1007/978-3-319-42297-8_66.