In-bed Pressure-based Pose Estimation using Image Space Representation Learning


Abstract

Recent advances in deep pose estimation models have proven to be effective in a wide range of applications such as health monitoring, sports, animations, and robotics. However, pose estimation models fail to generalize when facing images acquired from in-bed pressure sensing systems. In this paper, we address this challenge by presenting a novel end-to-end framework capable of accurately locating body parts from vague pressure data. Our method exploits the idea of equipping an off-the-shelf pose estimator with a deep trainable neural network, which pre-processes and prepares the pressure data for subsequent pose estimation. Our model transforms the ambiguous pressure maps to images containing shapes and structures similar to the common input domain of the pre-existing pose estimation methods. As a result, we show that our model is able to reconstruct unclear body parts, which in turn enables pose estimators to accurately and robustly estimate the pose. We train and test our method on a manually annotated public pressure map dataset using a combination of loss functions. Results confirm the effectiveness of our method by the high visual quality in the generated images and the high pose estimation rates achieved.


Vandad Davoodnia, Saeed Ghorbani, Ali Etemad
Department of Electrical and Computer Engineering, Queen’s University, Kingston, Canada
Department of Electrical Engineering & Computer Science, York University, Toronto, Canada

Keywords

In-Bed Pose Estimation, Smart Beds, Pre-processing, Deep learning

1 Introduction

Sleeping makes up a third of the human life span. As a result of recent advances in science, sleep studies, especially data-driven techniques, have attracted many researchers to the field. Moreover, low-cost processing and monitoring systems have enabled the utilization of sleep-related technologies in smart homes and clinics, paving the way for considerable impacts on health and quality of life.

Generally, sleep-related research includes the study of complications in the respiratory system, insomnia, and movement-related disorders [9]. Furthermore, it is shown that movement and posture during sleep have critical impacts on disorders such as sleep apnea [15] and pressure ulcers [2, 16]. As a result, monitoring posture in smart home and clinical settings is of great importance in order to identify or prevent the occurrence of such disorders.

Sleep monitoring technologies, such as textile-based pressure recording systems, have enabled ubiquitous and automated monitoring of movement, allowing for their use in both clinical health-care and research [13, 17]. However, most of the studies using such systems are limited to coarse pose identification [17, 24, 23, 8, 7], namely left, supine, and right postures. In clinics, however, it is critical to obtain information about specific pressure points and the relative pose of the limbs with respect to the body [6, 18, 20]. Consequently, in-bed body part localization and pose estimation has recently attracted researchers [5]. Nonetheless, such works remain scarce, and very little has been done in this area.

Figure 1: Input pressure maps recorded using a mattress with embedded sensors are presented (top row), where weak and vague pressure points are observed. The estimated poses using our proposed end-to-end method are presented (bottom row).

With the recent advances in deep learning, a number of data-driven methods have been developed for pose estimation using natural camera-based images for a wide range of applications such as animation, robotics, sports, and human tracking [19, 14, 21, 3, 4]. Although these models are capable of achieving strong body pose recognition, they perform poorly when used on matrices acquired through in-bed pressure-mapping systems, mostly due to the difference in postures and low-pressure points such as the head, knees, and hands. In this paper, we address this issue by proposing a learnable pre-processing block that enables off-the-shelf pose estimation models to generalize to pressure maps as well.

In this paper, we propose an end-to-end framework that allows off-the-shelf pose estimation models to be used for in-bed pose estimation, regardless of challenges such as weak, noisy, or vague pressure points (see Figure 1). Our method consists of two main modules: PolishNet, a fully convolutional hourglass neural network that processes the input pressure maps so that its output lies within the input domain of the second module, a conventional off-the-shelf pose estimator pre-trained on camera-based images. We use OpenPose [3] as the pre-trained pose estimator in our end-to-end pipeline. In our architecture, we keep the parameters of the pose estimator frozen, while training PolishNet to minimize the Part Affinity Field (PAF) and heatmap losses between the annotated images and predicted outputs. We show that our end-to-end method significantly outperforms the direct use of pose estimation methods and detects limb positions robustly under a leave-one-subject-out validation strategy. Furthermore, we demonstrate that our network is capable of generating proper images for the pose estimators even when trained on only a limited number of subjects, whereas standalone pose estimation models require large datasets to perform appropriately. Finally, to further evaluate the performance of our approach with other pose estimators, we swap OpenPose with another pose estimator, DeeperCut [12, 11], and observe robust performance, which points to the modular and generalized nature of our proposed solution.

2 Proposed Method

Overview: Our goal is to learn a pre-processing step that receives the pressure data as inputs and synthesizes images such that a pre-trained pose estimation module shows stable and accurate performance. In other words, the output data from the learner should lie on the data manifold used by the pose estimation module. Therefore, this learnable pre-processing step, which we call PolishNet, converts the pressure data to polished images that better resemble human figures as expected by the commonly available pose estimation models.

Figure 2: Our proposed framework is presented. PolishNet is designed to learn the transformation from raw pressure maps into polished images, which can be fed directly to the pre-trained OpenPose module. The objective incorporates heatmap and PAF losses to force PolishNet to synthesize completed body parts, as well as a pixel loss to discourage the polished images from deviating largely from the pressure maps.

Our pipeline consists of two blocks: (1) PolishNet, which filters the pressure maps, and (2) a pre-trained pose estimation network that generates a heatmap for every body part corresponding to its location. As illustrated in Figure 2, PolishNet utilizes a combination of loss functions, namely the pixel-wise mean-squared error between the input and the PolishNet output, a heatmap loss corresponding to the body part positions, and a Part Affinity Field (PAF) loss for body part identification. By training the pipeline with these loss functions, we ensure the generation of polished images consistent with the pose estimation network’s input data manifold while keeping the general properties of the input image.
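The weighted combination of these three losses can be sketched as follows. This is a minimal illustrative sketch: the weight values, the flat-list tensor representation, and the function names are assumptions, not the paper's actual implementation.

```python
# Sketch of the combined training objective described above. The pose
# estimator's weights stay frozen; only PolishNet would receive gradients.
# lambda_h, lambda_p, lambda_pix are hypothetical placeholder weights.

def combined_loss(pred_heatmaps, gt_heatmaps,
                  pred_pafs, gt_pafs,
                  polished, pressure,
                  lambda_h=1.0, lambda_p=1.0, lambda_pix=0.1):
    """Weighted sum of heatmap, PAF, and pixel MSE terms (flat lists)."""
    def mse(a, b):
        pairs = list(zip(a, b))
        return sum((x - y) ** 2 for x, y in pairs) / len(pairs)

    l_h = mse(pred_heatmaps, gt_heatmaps)   # body-part location heatmaps
    l_p = mse(pred_pafs, gt_pafs)           # part affinity fields
    l_pix = mse(polished, pressure)         # keep polished image near input
    return lambda_h * l_h + lambda_p * l_p + lambda_pix * l_pix
```

In a real deep-learning framework, each `mse` would operate on tensors and the weighted sum would be the scalar passed to the optimizer.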

Architecture and Loss: Let $x$ be the input pressure data. Pre-processing the input is then performed by a variant of an hourglass network $P$, called PolishNet. Our proposed network (see Figure 2) contains three encoder blocks of Conv-Conv-BatchNorm-LeakyReLU and three blocks of DeConv-DeConv-BatchNorm-LeakyReLU on the decoder side. By utilizing the encoder-decoder blocks, we enable the network to capture the properties of the pressure data and incorporate pose and shape information in the latent space to generate the desired image in the polished data space, which is compatible with the pose estimation module. Accordingly, $\hat{x} = P(x; \theta_P)$ is the output of PolishNet, where $\theta_P$ are the network’s trainable parameters.
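Since the hourglass pairs each encoder block with a decoder block, the polished output keeps the input's spatial resolution. The trace below sketches this symmetry; the downsampling stride of 2 per block is a hypothetical choice, as the actual stride is not stated here.

```python
# Trace the spatial resolution through the hourglass: each of the three
# encoder blocks downsamples by `stride`, each of the three decoder blocks
# upsamples back, so the output resolution matches the input.
# stride=2 is an assumed value for illustration.

def polishnet_shape_trace(h, w, n_blocks=3, stride=2):
    shapes = [(h, w)]
    for _ in range(n_blocks):          # Conv-Conv-BatchNorm-LeakyReLU
        h, w = h // stride, w // stride
        shapes.append((h, w))
    for _ in range(n_blocks):          # DeConv-DeConv-BatchNorm-LeakyReLU
        h, w = h * stride, w * stride
        shapes.append((h, w))
    return shapes
```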

To estimate the body pose and train PolishNet, we utilize OpenPose [3], a fast and reliable pose estimation network ($O$) capable of accurately detecting different body joints known as keypoints. The output of OpenPose includes several heatmaps ($H$) and PAFs ($F$) for each keypoint and its connections. Each heatmap is a 2D distribution of the belief that a keypoint is located on each pixel, while the PAF is defined as a 2D vector field connecting two limbs, encoding both the position and the orientation of the connection. For our purposes, we only utilize the visible keypoints of the head, neck, shoulders, elbows, wrists, ankles, knees, and hips, for a total of 14 heatmaps and the PAFs for their connections. Accordingly, we define $(\hat{H}, \hat{F}) = O(\hat{x}; \theta_O)$, where $\theta_O$ is the set of OpenPose parameters, and $\hat{H}$ and $\hat{F}$ are the estimates of the ground-truth heatmaps $H$ and PAFs $F$, respectively.
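A single keypoint location can be read off its heatmap as the argmax of the belief values. The sketch below illustrates only this per-keypoint step, not OpenPose's full multi-person parsing, which additionally uses the PAFs to associate keypoints into skeletons.

```python
# Return the (row, col) of the strongest belief in one 2D heatmap,
# given as a list of rows of scalar belief values.

def heatmap_argmax(heatmap):
    return max(
        ((r, c) for r, row in enumerate(heatmap) for c in range(len(row))),
        key=lambda rc: heatmap[rc[0]][rc[1]],
    )
```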

Next, we define an objective function with a heatmap term $\mathcal{L}_H$, a PAF term $\mathcal{L}_F$, and a pixel loss term $\mathcal{L}_{pix}$ for training PolishNet as follows:

$\mathcal{L}_H = \sum_{j} \left\| \hat{H}_j - H_j \right\|_2^2$ (1)
$\mathcal{L}_F = \sum_{c} \left\| \hat{F}_c - F_c \right\|_2^2$ (2)
$\mathcal{L}_{pix} = \left\| \hat{x} - x \right\|_2^2$ (3)

The final objective function is then defined by:

$\mathcal{L} = \lambda_H \mathcal{L}_H + \lambda_F \mathcal{L}_F + \lambda_{pix} \mathcal{L}_{pix}$ (4)

where the first two terms force PolishNet to synthesize images containing the correct pose, and the last term helps PolishNet to maintain the original pressure image’s shape information.

                 Head         R Shoulder   R Elbow      R Wrist      R Hip        R Knee       R Ankle
OpenPose only    19.4 ± 27.1  73.0 ± 19.1  55.6 ± 15.4  28.1 ± 14.4  67.2 ± 25.5  56.4 ± 30.9  37.8 ± 20.3
Proposed Method  92.9 ± 19.2  92.1 ± 19.7  89.7 ± 20.0  83.1 ± 24.4  93.9 ± 20.1  94.1 ± 18.4  93.6 ± 19.5

                 Neck         L Shoulder   L Elbow      L Wrist      L Hip        L Knee       L Ankle
OpenPose only    79.2 ± 18.8  73.6 ± 21.9  57.4 ± 20.1  30.1 ± 17.7   4.5 ±  7.4  55.4 ± 30.9  39.6 ± 21.0
Proposed Method  94.9 ± 18.1  92.2 ± 19.7  91.3 ± 19.7  85.4 ± 23.4  93.6 ± 19.5  94.0 ± 18.9  93.2 ± 18.6

Table 1: The area under the PCK curves and their standard deviations are presented for our proposed method and OpenPose only.

Implementation Details: We implement the pipeline using TensorFlow on an NVIDIA Titan XP GPU. The convolution kernels of the PolishNet module were with a stride of , and the LeakyReLU activations use a negative-slope coefficient of . Larger kernel sizes interpolate the body shapes better at the cost of losing input-output image similarity. We use the Adam optimizer to train the pipeline for epochs with a batch size of . We use a learning rate of , which we decay with a rate of for every update iterations. Finally, $\lambda_H$ and $\lambda_F$ are both set to the same value, while $\lambda_{pix}$ is set empirically to allow PolishNet to focus more on reconstructing the vague pressure points of the body.

3 Experiment Setup and Results

3.1 Data Preparation

We used the PmatData dataset [17, 10] to train and test our pressure-based pose estimation approach. The pressure data were recorded by a Force Sensitive Application (FSA) pressure-mapping mattress. The mattress was equipped with sensors, inch apart from each other. The recording was performed with a frequency of Hz for a pressure range of - mmHg. subjects, with a height range of - cm, a weight range of - kg, and an age range of - years, participated in the experiment by sleeping on the mattress in a total of unique poses.

The PmatData dataset does not contain the ground-truth joint labels needed for training and evaluation purposes. Therefore, we developed and utilized a MATLAB tool for annotating the body part locations, and subsequently labeled pressure maps manually. We then implemented an annotation tool that automatically annotated the rest of the pressure maps for each subject and each posture using similarity in the image space based on the sum of squared errors. This was possible since each subject appears in a very similar (almost identical) posture, with only small variations in general position, during the majority of each recording session for a given posture class.
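The automatic annotation step can be sketched as a nearest-template search: each unlabeled frame inherits the labels of the most similar manually annotated frame under the sum of squared errors. The flat-list frame format and function names below are simplified placeholders, not the actual MATLAB tool.

```python
# Propagate manual annotations by image-space similarity.
# labeled_frames[i] carries annotation labels[i]; frames are flat pixel lists.

def propagate_labels(unlabeled_frames, labeled_frames, labels):
    def sse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    out = []
    for frame in unlabeled_frames:
        errors = [sse(frame, ref) for ref in labeled_frames]
        best = errors.index(min(errors))   # nearest annotated template
        out.append(labels[best])
    return out
```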

Next, we applied a spatio-temporal median filter to the input data to remove the noise caused by occasional sensors reporting unexpected values. We also removed frames recorded during transitions between sleeping poses, since in some cases they did not show a clear image of the body.
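The temporal half of such a filter can be sketched as below; the window size of 3 and the pure-Python frame format are illustrative assumptions. The spatial half would apply the same idea over each frame's pixel neighborhood.

```python
# Temporal median filter over a sequence of pressure frames:
# each pixel is replaced by the median of its values in a sliding
# time window (clipped at the sequence boundaries).

from statistics import median

def temporal_median(frames, window=3):
    """frames: list of flat pixel lists; returns the filtered sequence."""
    half = window // 2
    out = []
    for t in range(len(frames)):
        lo, hi = max(0, t - half), min(len(frames), t + half + 1)
        out.append([median(f[p] for f in frames[lo:hi])
                    for p in range(len(frames[t]))])
    return out
```

A single-frame spike (e.g. one sensor reporting an outlier for one sample) is suppressed because the median ignores the extreme value.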

3.2 Performance Evaluation

To evaluate the performance of our pipeline on the annotated data, we used the probability of correct keypoint (PCK) evaluation metric, which is a measure of joint localization accuracy [22]. Accordingly, we measure the distance between the detected and ground-truth keypoints, and if this distance is below a certain threshold, the keypoint in question is considered a true positive. The threshold is defined as a fraction of the person’s size, where the size is defined as the distance between the person’s left shoulder and right hip [1]. To perform a thorough evaluation of our method, we use a leave-one-subject-out cross-validation, where we leave one subject out for validation and use the remaining subjects for training.
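The PCK computation described above can be sketched as follows; the threshold fraction `alpha` is an assumed parameter, since the exact fraction used is not stated here.

```python
# PCK: a predicted keypoint counts as correct when its distance to the
# ground truth is below alpha * reference_size, where the reference size
# is the left-shoulder-to-right-hip distance of the person.

from math import dist  # Euclidean distance, Python 3.8+

def pck(pred, gt, left_shoulder, right_hip, alpha=0.5):
    """pred, gt: lists of (x, y) keypoints; returns the fraction correct."""
    threshold = alpha * dist(left_shoulder, right_hip)
    correct = sum(dist(p, g) <= threshold for p, g in zip(pred, gt))
    return correct / len(gt)
```

Sweeping `alpha` over a range of values and integrating yields the area under the PCK curve reported in Tables 1 and 2.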

Figure 3: Examples of the performance of the proposed architecture in estimating pose from input pressure maps.
Figure 4: Illustration of examples where PolishNet has reconstructed weak body parts. Specifically note the arms, knees, and the head.

We evaluate our method by comparing the performance of OpenPose on colorized pressure maps vs. the proposed pipeline including PolishNet. The areas under the PCK curves are presented in Table 1, demonstrating that for all the body parts, our proposed pipeline considerably outperforms the use of OpenPose alone on the colorized pressure maps. Low standard deviations in our test results indicate the consistency of our model. It is observed from the table that for challenging body parts with weak pressure points, such as the wrists, or for ones with completely different appearances, such as the head, PolishNet provides consistently accurate images, eventually resulting in accurate pose estimation.

Some examples depicting the performance of our method are presented in Figure 3. It is seen that in most cases, OpenPose alone is unable to correctly identify the poses, if at all. Moreover, we observe that our proposed PolishNet + OpenPose pipeline accurately identifies the poses even for vague input pressure maps. Since PolishNet is trained to synthesize images compatible with the image space on which OpenPose was trained, the polished outputs show a higher resemblance to common standing human poses. In Figure 4, we notice that PolishNet reconstructs and connects the limbs and weak pressure areas that are not clearly visible in the pressure maps; we have highlighted some of these reconstructed regions. Interestingly, in some instances, PolishNet even attempts to synthesize outfits for the subjects to make the output images look more natural and consistent with the input image space of the pose estimator; examples of such outfit-like patches can be seen in Figure 3, especially around the hip and torso areas.

Model                               Average Detection Rate
OpenPose only                       47.7 ± 10.8
PolishNet + OpenPose                95.8 ±  0.3
DeeperCut only                      54.1 ±  1.3
Pre-trained PolishNet + DeeperCut   80.9 ±  2.4

Table 2: Quantitative evaluation of different models is presented. OpenPose and DeeperCut are tested with and without a PolishNet that has been pre-trained for OpenPose. For both OpenPose and DeeperCut, the original and frozen versions are used.

To further evaluate our method, we freeze PolishNet after training with OpenPose as the pose estimation module, then swap OpenPose with another popular pose estimation model, in this case DeeperCut [12, 11]. We then test the pipeline with the pressure maps. The average area under the PCK curves over all the body parts and the respective standard deviations are provided for each architecture in Table 2. As expected, PolishNet + OpenPose achieves the highest average detection rate since PolishNet is trained in a pipeline where OpenPose is used as the pose identification module. Interestingly, the pre-trained PolishNet followed by DeeperCut outperforms pose estimation with DeeperCut alone, improving the average detection rate by 26.8. This further demonstrates that the images synthesized by PolishNet lie on, or close to, the manifolds with which most pose estimation models are trained. This allows us to use the pre-trained PolishNet as a learned pre-processor for enhancing ambiguous pressure maps, followed by any pre-trained pose identification block that may be selected based on available resources, constraints, and other properties.

4 Conclusions

Deep pose estimators are capable of detecting users’ poses from natural images, while failing on data acquired from other devices such as pressure mapping systems, which are gaining popularity for health- and sleep-related research. In this paper, we addressed this issue by presenting a novel framework for in-bed pose estimation using an off-the-shelf pose estimation network, OpenPose, equipped with a learnable pre-processing block called PolishNet. Using this design, our end-to-end model not only allows pose estimation models to detect body parts with high accuracy, but also uses PolishNet to reconstruct vague and ambiguous body parts, such as the wrists and knees. Furthermore, PolishNet results in synthesized images that can be used by other pose estimators as well. Our evaluation on a public dataset, PmatData, showed a 95.8 average detection rate with a leave-one-subject-out strategy, and 80.9 when tested with another pose estimation network, namely DeeperCut. Finally, our proposed model can be effectively implemented in smart homes and clinical settings for ubiquitous and unobtrusive sleep monitoring.

5 Acknowledgements

The Titan XP GPU used for this research was donated by the NVIDIA Corporation.

References

  1. M. Andriluka, L. Pishchulin, P. Gehler and B. Schiele (2014) 2D human pose estimation: new benchmark and state of the art analysis. In IEEE Conference on computer Vision and Pattern Recognition (CVPR), pp. 3686–3693. Cited by: §3.2.
  2. J. Black, M. M. Baharestani, J. Cuddigan, B. Dorner, L. Edsberg, D. Langemo, M. E. Posthauer, C. Ratliff and G. Taler (2007) National pressure ulcer advisory panel’s updated pressure ulcer staging system. Advances in Skin & Wound Care 20 (5), pp. 269–274. Cited by: §1.
  3. Z. Cao, T. Simon, S. Wei and Y. Sheikh (2017) Realtime multi-person 2d pose estimation using part affinity fields. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1302–1310. Cited by: §1, §1, §2.
  4. Y. Chen, Z. Wang, Y. Peng, Z. Zhang, G. Yu and J. Sun (2018) Cascaded pyramid network for multi-person pose estimation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7103–7112. Cited by: §1.
  5. H. M. Clever, Z. Erickson, A. Kapusta, G. Turk, K. Liu and C. C. Kemp (2020) Bodies at rest: 3d human pose and shape estimation from a pressure image using synthetic data. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6215–6224. Cited by: §1.
  6. J. P. S. Cunha, H. M. P. Choupina, A. P. Rocha, J. M. Fernandes, F. Achilles, A. M. Loesch, C. Vollmar, E. Hartl and S. Noachtar (2016) NeuroKinect: a novel low-cost 3d video-eeg system for epileptic seizure motion quantification. PLoS One 11 (1), pp. e0145669. Cited by: §1.
  7. V. Davoodnia and A. Etemad (2019) Identity and posture recognition in smart beds with deep multitask learning. In IEEE International Conference on Systems, Man and Cybernetics (SMC), pp. 3054–3059. Cited by: §1.
  8. V. Davoodnia, M. Slinowsky and A. Etemad (2020) Deep multitask learning for pervasive bmi estimation and identity recognition in smart beds. Journal of Ambient Intelligence and Humanized Computing, pp. 1–15. Cited by: §1.
  9. J. A. Douglas, C. L. Chai-Coetzer, D. McEvoy, M. T. Naughton, A. M. Neill, P. Rochford, J. Wheatley and C. Worsnop (2017) Guidelines for sleep studies in adults–a position statement of the australasian sleep association. Sleep Med 36 (Suppl 1), pp. S2–S22. Cited by: §1.
  10. A. L. Goldberger, L. A. N. Amaral, L. Glass, J. M. Hausdorff, P. Ch. Ivanov, R. G. Mark, J. E. Mietus, G. B. Moody, C. Peng and H. E. Stanley (2000-06) PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. Circulation 101 (23), pp. e215–e220. External Links: Document, ISSN 0009-7322, Link Cited by: §3.1.
  11. E. Insafutdinov, M. Andriluka, L. Pishchulin, S. Tang, E. Levinkov, B. Andres and B. Schiele (2017) ArtTrack: articulated multi-person tracking in the wild. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1293–1301. Cited by: §1, §3.2.
  12. E. Insafutdinov, L. Pishchulin, B. Andres, M. Andriluka and B. Schiele (2016) Deepercut: a deeper, stronger, and faster multi-person pose estimation model. In European Conference on Computer Vision (ECCV), pp. 34–50. Cited by: §1, §3.2.
  13. A. Q. Javaid, R. Gupta, A. Mihalidis and S. A. Etemad (2017) Balance-based time-frequency features for discrimination of young and elderly subjects using unsupervised methods. In IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), pp. 453–456. Cited by: §1.
  14. L. Ke, M. Chang, H. Qi and S. Lyu (2018) Multi-scale structure-aware network for human pose estimation. arXiv preprint arXiv:1803.09894. Cited by: §1.
  15. C. H. Lee, D. K. Kim, S. Y. Kim, C. Rhee and T. Won (2015) Changes in site of obstruction in obstructive sleep apnea patients according to sleep position: a dise study. The Laryngoscope 125 (1), pp. 248–254. Cited by: §1.
  16. J. J. Liu, M. Huang, W. Xu and M. Sarrafzadeh (2014) Bodypart localization for pressure ulcer prevention. In Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 766–769. Cited by: §1.
  17. S. Ostadabbas, M. B. Pouyan, M. Nourani and N. Kehtarnavaz (2014) In-bed posture classification and limb identification. In IEEE Biomedical Circuits and Systems Conference, pp. 133–136. Cited by: §1, §3.1.
  18. M. J. Peterson, W. Schwab, J. H. Van Oostrom, N. Gravenstein and L. J. Caruso (2010) Effects of turning on skin-bed interface pressures in healthy adults. Journal of Advanced Nursing 66 (7), pp. 1556–1564. Cited by: §1.
  19. W. Tang, P. Yu and Y. Wu (2018) Deeply learned compositional models for human pose estimation. In European Conference on Computer Vision (ECCV), pp. 190–206. Cited by: §1.
  20. P. S. Walton-Geer (2009) Prevention of pressure ulcers in the surgical patient. Aorn Journal 89 (3), pp. 538–552. Cited by: §1.
  21. W. Yang, S. Li, W. Ouyang, H. Li and X. Wang (2017) Learning feature pyramids for human pose estimation. In IEEE International Conference on Computer Vision (ICCV), pp. 1290–1299. Cited by: §1.
  22. Y. Yang and D. Ramanan (2013) Articulated human detection with flexible mixtures of parts. IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (12), pp. 2878–2890. Cited by: §3.2.
  23. R. Yousefi, S. Ostadabbas, M. Faezipour, M. Farshbaf, M. Nourani, L. Tamil and M. Pompeo (2011) Bed posture classification for pressure ulcer prevention. In IEEE Engineering in Medicine and Biology Society, pp. 7175–7178. Cited by: §1.
  24. A. Zhao, J. Dong and H. Zhou (2020) Self-supervised learning from multi-sensor data for sleep recognition. IEEE Access. Cited by: §1.