Bio-LSTM: A Biomechanically Inspired Recurrent Neural Network for 3D Pedestrian Pose and Gait Prediction


Xiaoxiao Du, Ram Vasudevan, and Matthew Johnson-Roberson. This work was supported by a grant from Ford Motor Company via the Ford-UM Alliance under award N022884. X. Du and M. Johnson-Roberson are with the Department of Naval Architecture and Marine Engineering, University of Michigan, Ann Arbor, MI 48109 USA; mattjr@umich.edu. R. Vasudevan is with the Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI 48109 USA.

In applications such as autonomous driving, it is important to understand, infer, and anticipate the intention and future behavior of pedestrians. This ability allows vehicles to avoid collisions and improve ride safety and quality. This paper proposes a biomechanically inspired recurrent neural network (Bio-LSTM) that can predict the location and 3D articulated body pose of pedestrians in a global coordinate frame, given noisy 3D poses and locations estimated in prior frames. The proposed network is able to predict poses and global locations for multiple pedestrians simultaneously, at distances up to 45 meters from the cameras (urban-intersection scale). The outputs of the proposed network are full-body 3D meshes represented by Skinned Multi-Person Linear (SMPL) model parameters. The proposed approach relies on a novel objective function that incorporates the periodicity of human walking (gait), the mirror symmetry of the human body, and the change of ground reaction forces in a human gait cycle. This paper presents prediction results on the PedX dataset, a large-scale, in-the-wild dataset collected at real urban intersections with heavy pedestrian traffic. Results show that the proposed network can successfully learn the characteristics of pedestrian gait and produce accurate and consistent 3D pose predictions.

Deep learning in robotics and automation, gesture, posture and facial expressions, kinematics, long short-term memory (LSTM), pedestrian gait prediction

I Introduction

Imagine that an autonomous vehicle is driving towards a crowded urban intersection. It is important to identify moving pedestrians and anticipate where a pedestrian, or a group of pedestrians, may be in a few seconds to decide whether and when to brake. Imagine also that a robot is serving as a tour guide in a museum [1] or in a shopping mall packed with pedestrians [2]. It is essential for the robot to recognize the orientation and location of persons around to provide better guidance and avoid running into pedestrians. In these scenarios, accurate pedestrian pose and location prediction has a huge impact in facilitating more effective human-robot/vehicle interaction and collision avoidance.

Human pose estimation has been heavily studied in the literature [3, 4, 5, 6, 7, 8]. However, prior work has primarily focused on estimating the joint locations of a human skeleton model from a single, static RGB image in the current frame and does not address the pose-prediction problem for future frames. More recently, researchers have begun investigating the prediction (forecasting and anticipation) of human body pose given a video sequence [9, 10, 11, 12, 13, 14, 15, 16]. Most of this work focuses on a skeleton-based representation of joint locations. Moreover, some studies, such as [10, 11], are limited to predicting the 2D pose of a single human subject, usually centered in a video frame. On the other hand, deep learning techniques, especially recurrent neural networks, have proven effective at predicting future frames in natural video sequences [17, 18]. However, these approaches focus on pixel-level prediction in images and do not specifically work with human pose representations (skeleton or mesh).

This paper focuses on two novel aspects of the problem: predicting a full-body 3D mesh and doing so for multiple humans simultaneously. Furthermore, we attempt to constrain the problem using the well-studied biomechanics of human walking while using the contextual information within urban-intersection environments. Note that in some of the literature, the terms “pose prediction” and “pose estimation” are used interchangeably, both referring to the task of estimating a pose (usually skeleton-based joint locations) from a single image (the current frame) [19, 20]. In this paper, we use the term “prediction” to refer to the specific task of predicting/forecasting 3D pedestrian pose and location in future frames of a sequence, assuming the 3D poses were already estimated in a prior frame. The estimation of the initial 3D pose model is outside the scope of this paper, but is described in depth in Kim et al. [21].

We propose bio-LSTM, a biomechanically inspired recurrent neural network to solve this task. The proposed network takes previously estimated pose parameters in past frames as input and outputs a full-body 3D mesh of a pedestrian pose, localized in a global coordinate system in metric space at future timesteps. Our network can predict multiple pedestrians in each frame at real intersection scales (up to 45 meters), and the mesh representation contains richer information about the body shape and scale that traditional skeletal representations lack [22]. The proposed network is based on the long short-term memory (LSTM) network [23] with inspiration from the biomechanics of human gait, such as the bilateral/mirror symmetry of the human body [24], the periodicity of human walking (gait) [25], and the change of ground reaction force in a human gait cycle [26, 27].

We present experimental results of our proposed network on the PedX dataset [21], a large-scale, in-the-wild dataset collected at real urban intersections with heavy pedestrian traffic in the city of Ann Arbor, Michigan, USA. In addition to the PedX intersection dataset, we also collected and annotated an evaluation dataset in a controlled outdoor environment with a motion capture (mocap) system. We compare our prediction to both the 3D labels generated by a novel optimization procedure [21] and the mocap ground truth to verify the accuracy of our method. Results show successful and accurate body pose prediction for both next-frame and multiple timesteps.

The contributions of this paper include:

1) full-body 3D mesh prediction, in addition to skeleton-based joint locations, in a global coordinate frame and in metric space;

2) a novel biomechanics-based loss function in the LSTM network to ensure realistic and naturalistic pose prediction; and

3) in-the-wild gait and pose prediction for multiple pedestrians given noisy urban intersection data.

We envision our work having applications in the development of legged robots, rehabilitation, and robot-assisted physiotherapy, in addition to our original motivation in the autonomous-driving and human-robot interaction contexts. We also present longer-term prediction results, which enable evasive maneuvers and path planning using the predicted information, as well as semantic interpretation of pedestrians’ future actions.

This paper is organized as follows: Section I introduces the problem of 3D human forecasting and motivates our work. Section II describes related work in sequence prediction and introduces the SMPL model [28], a parametric body-shape model that we use to represent the 3D human pose. We also describe related work in gait analysis, where we drew inspiration for our network formulation. Section III describes our proposed network and bio-inspired loss function. Section IV describes the PedX dataset and the experimental setup. Section V presents our prediction results on both next-frame and multiple frame forecasts. Section VI presents our conclusions and future work.

II Related Work

In this section, we first describe related work on video-sequence prediction. Then, we describe the SMPL model that we use to represent the 3D human pose. We also describe related work in gait biomechanics that inspired our method.

II-A Sequence Prediction

Recurrent neural networks (RNNs) have shown effective results in learning the temporal dynamics of a sequence [29]. The LSTM network [23], in particular, has been widely used in the literature for sequence prediction due to its ability to learn long-term dependencies [30, 31, 32, 33]. Recently, LSTM networks have been applied to predicting future image frames in natural video sequences, as in PredNet [17] and MCnet [18]. However, these studies mainly focus on video image sequences and usually use convolution operations to take advantage of the pixel spatial layout of the image.

For the specific task of human pose prediction, previous research has investigated predicting joint locations in future frames given past video sequences [9, 10, 11, 12]. However, in most of these studies, the human pose is represented simply by joint locations in a skeleton and visualized by overlaying the skeleton on the 2D image. Moreover, Toyer et al. [10] and Fragkiadaki et al. [11] are limited to predicting the 2D pose of a single human subject centered in a video sequence. These assumptions do not always hold: in videos collected at a crowded urban intersection, there are multiple pedestrians moving simultaneously, and some pedestrians can be quite far from the camera. Additionally, skeleton-based joint locations may not always accurately represent the full human-body pose. For example, Figure 1(b) and Figure 1(c) both have the same wrist location and a very small difference in hand-joint location, yet Figure 1(c) shows a biologically unfeasible body pose in the mesh. Therefore, it is important to predict the 3D full-body mesh, in addition to skeleton-based joint locations, to represent the pose.

Fig. 1: An illustration of a human pose skeleton and full-body mesh. (a) A rest-pose (T-pose) skeleton. (b) A SMPL [28] full-body mesh at the rest pose. (c) Another SMPL full-body mesh with the same skeleton joint locations as (a), but with biologically unfeasible wrist rotations (marked with circles). (d) A zoomed-in view of the biologically unfeasible wrist joint in (c). The wrist has been rotated to a biologically unfeasible angle, yet the wrist-joint location remains the same.

II-B 3D Human Pose Representation

In this paper, we represent the 3D human pose using the Skinned Multi-Person Linear (SMPL) model [28]. We selected the SMPL representation because:

1) it can represent varying human-body shapes and poses accurately and realistically [28];

2) the output is a full-body 3D mesh in addition to traditional skeleton-based 3D joint locations [28, 34]; and

3) it is a parametric statistical model that can easily represent the location, pose, and shape of a person by a vector of parameters.

The SMPL model has been used widely in image-to-pose estimation [19, 20, 35], yet little prior work exists on predicting/forecasting SMPL models into the future, particularly in global coordinate frames.

The SMPL model is formulated by three types of parameters: translation τ, pose θ, and shape β. The 3D body mesh is notated as M(τ, θ, β). The translation (“trans”) τ has three parameter values, indicating the global translation (distance in meters from the data capture system to the person) along the x, y, and z axes. The pose parameters θ consist of the axis-angle representation of the relative rotation of 23 joints in a skeleton rig of the body and three root orientation parameters in the x, y, and z axes (a total of 72 parameters) [28]. The shape β has 10 parameter values and indicates the body shape of the person. Under this formulation, the task of predicting a 3D human pose becomes that of predicting 85 (= 3 + 72 + 10) SMPL parameters.
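To make the parameterization concrete, the 85-dimensional prediction target can be sketched as a flat vector. The helper names below (`pack_smpl`, `unpack_smpl`) are illustrative only and are not part of the SMPL library:

```python
import numpy as np

# Hypothetical helpers (not part of the SMPL library) packing the
# 3 translation + 72 pose + 10 shape values into one 85-dim target vector.
def pack_smpl(trans, pose, shape):
    assert trans.shape == (3,) and pose.shape == (72,) and shape.shape == (10,)
    return np.concatenate([trans, pose, shape])

def unpack_smpl(params):
    assert params.shape == (85,)
    return params[:3], params[3:75], params[75:]

params = pack_smpl(np.zeros(3), np.zeros(72), np.zeros(10))
trans, pose, shape = unpack_smpl(params)
```

In this layout, the network predicts only the first 75 entries (translation and pose); the 10 shape entries are carried over from the previous frame.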

II-C Gait Biomechanics

In addition to maintaining a feasible body pose (i.e., avoiding twists such as in Figure 1(d)), it is important to take the biomechanical characteristics of human gait into consideration. Gait analysis is a long-standing field of study and has had an enormous impact on human locomotion research and the development of bipedal robots [25, 27, 36, 37]. For the specific task of pedestrian-walking pose prediction, we review related work in human gait studies and draw inspiration from three prominent biomechanical characteristics for our network: mirror symmetry of the human body, gait periodicity, and the change of ground reaction force in a human gait cycle.

The bilateral/mirror symmetry of a healthy human body has long been observed in the literature [38, 39, 40]. When the legs are positioned symmetrically along the center of the hip, the person is in balance. As shown in Figure 2, it is desirable that the left and right leg angles are equal, i.e., α_l = α_r (also see the rest pose in Figure 1(b)). Similar symmetry can be observed for the two shoulder joints as well [41].

Fig. 2: An illustration of human symmetry. α_l is the angle between the left leg and the orthogonal line to the ground plane that runs through the center of the hip, and α_r is the angle between the right leg and the same center line. (a), (b) Examples where α_l ≠ α_r. (c) An example where α_l = α_r. Among these three poses, pose (c) is the most stable and most similar to the natural human leg pose during standing/walking.

Cyclic leg movement is another important feature of human gait [25, 42]. It has been observed that humans walk with rhythmic and periodic motion [43]. Step after step, a person’s leg movement follows this cyclic motion, under the assumption that all successive cycles are approximately the same as the first when traveling at a constant speed [25]. In addition, it is assumed that the speed, stride, and direction during a normal walking cycle, and all successive cycles, do not suddenly change without an external force (e.g., a person does not suddenly flip during normal walking) [44]. We incorporate such periodicity into our proposed network.

In addition, sufficient ground reaction forces (GRFs) are needed to support the body during walking [25]. The GRFs are applied through the feet, which means at least a part of one foot must be in contact with the ground [25]. To this end, we compute a local ground plane at the scene and map our body mesh prediction to ensure physically plausible contact between the feet and the ground.

III Method

The goal of our network is to predict 3D full-body meshes in future frames, given 3D poses in past frames. Figure 3 illustrates the network diagram of our proposed approach. Details about the network architecture and error functions are described in the following subsections.

Fig. 3: An illustration of our proposed network. This illustration is inspired by the network diagram in [45], with the architecture modified for our specific design. The inputs and outputs of the network are vectors of SMPL parameters for all pedestrians in the scene. The bio-constraints are enforced through the training objectives of the network. For MTP, the predictions are continuously fed back to the network to predict all future timesteps.

III-A Network Architecture

We implemented a two-layer stacked LSTM recurrent neural network followed by a densely connected neural network (NN) layer as our basic network architecture. This architecture was inspired by the LSTM-3LR method [11]. We experimented with the number of layers (ranging from one to five) and found that the root-mean-square prediction error (RMSE) stopped decreasing after the third layer was added; therefore, we settled on a two-layer stacked architecture. We used this LSTM structure to predict both SMPL translation and pose parameters (3 translation parameters and 72 pose parameters, respectively). We define T as the look-back window length in the training sequences, N as the total number of training sequences, and D as the parameter dimension (D = 3 for translation and D = 72 for pose parameters). Thus, the input size of the network is N × (T − 1) × D. The sequence dimension is T − 1 because we use frame differences as part of our training objective functions, which will be further described in Section III-B. We assume the shape parameters (10 beta parameters) of each person remain the same as in the previous frame (the person’s body shape does not change from frame to frame). Each LSTM layer consists of 32 units (determined through experimentation). Section III-B describes, in detail, our bio-inspired training objective function (the error module in Figure 3). Section III-C describes our procedure for next-frame prediction. Section III-D describes our procedure for multiple-timestep prediction (MTP).
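The tensor bookkeeping of the stacked architecture can be sketched with a minimal, forward-pass-only LSTM in numpy. This is an illustration of the dimensions only (random, untrained weights); it is not the actual implementation, and the dimensions shown (72 pose parameters, a window of 6 frames, 32 units) follow the description above:

```python
import numpy as np

rng = np.random.default_rng(0)
D, T, H = 72, 6, 32   # pose parameter dim, look-back window, LSTM units

def lstm_layer(seq, d_in, d_h):
    """One LSTM layer, forward pass only, with random (untrained) weights."""
    W = rng.standard_normal((4 * d_h, d_in + d_h)) * 0.1
    b = np.zeros(4 * d_h)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    h, c, outs = np.zeros(d_h), np.zeros(d_h), []
    for x_t in seq:
        z = W @ np.concatenate([x_t, h]) + b
        i, f = sigmoid(z[:d_h]), sigmoid(z[d_h:2 * d_h])
        g, o = np.tanh(z[2 * d_h:3 * d_h]), sigmoid(z[3 * d_h:])
        c = f * c + i * g                  # cell state update
        h = o * np.tanh(c)                 # hidden state
        outs.append(h)
    return np.stack(outs)

x = rng.standard_normal((T - 1, D))        # T-1 frame differences as input
h1 = lstm_layer(x, D, H)                   # first stacked LSTM layer
h2 = lstm_layer(h1, H, H)                  # second stacked LSTM layer
W_out = rng.standard_normal((D, H)) * 0.1  # dense layer back to parameter space
d_next = W_out @ h2[-1]                    # predicted next frame difference
```

A second copy of this structure with D = 3 would handle the translation parameters.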

III-B Training Objectives

We incorporate three prominent biomechanical characteristics into the training objectives of our network: gait periodicity, mirror symmetry of the human body, and the change of ground reaction force (GRF) in a human gait cycle.

Fig. 4: An illustration of the periodicity loss, which is computed by predicting the frame difference (next-frame prediction).

First, to address gait periodicity, we express the periodicity loss as the mean absolute error between the frame difference in the prediction sequence and the “true” frame difference in the training data. We illustrate the process for next-frame prediction in Figure 4. Given the translation and pose parameters for the last T timesteps as X_{t-T+1}, …, X_t, our goal is to predict the translation and pose parameters X_{t+1} for the next timestep. Based on the assumption that the speed, stride, and direction do not suddenly change during walking cycles [44], we assume that the differences between frames remain steady and that the legs retain a cyclic motion. Therefore, we transform the problem into predicting the difference between frames. We define d_i = X_i - X_{i-1} as the difference at timestep i. We then use d_{t-T+2}, …, d_t as inputs to our network and predict d̂_{t+1} as output. Then, our output translation and pose at time t+1 is given by X̂_{t+1} = X_t + d̂_{t+1}. Thus, the periodicity loss for the sequence can be expressed as:

L_p = (1/N) Σ_n |d̂_{t+1}^(n) - d_{t+1}^(n)|,   (1)

where the sum runs over the N training sequences.
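A minimal sketch of the frame-difference transformation and the periodicity loss, assuming each frame is a flat parameter vector:

```python
import numpy as np

def frame_differences(X):
    """X: (T, D) sequence of translation+pose vectors -> (T-1, D) differences."""
    return X[1:] - X[:-1]

def periodicity_loss(d_pred, d_true):
    """Mean absolute error between predicted and observed frame differences."""
    return np.mean(np.abs(d_pred - d_true))

X = np.cumsum(np.ones((4, 3)), axis=0)       # toy sequence at constant speed
d = frame_differences(X)                     # every difference is [1, 1, 1]
loss = periodicity_loss(d, np.ones((3, 3)))  # perfect prediction -> zero loss
```

A constant-speed sequence has identical frame differences, so a predictor that preserves those differences incurs no periodicity penalty.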
Second, as discussed in Section II-C, a person is stable when the left and right legs and shoulder joints are in mirror symmetry. Thus, we can write the loss based on body mirror symmetry as:

L_s = |α_l - α_r| + |φ_l - φ_r|,   (2)

where α_l and α_r are the angles between the left and right legs and the center vertical line at the upper thigh joints, and φ_l and φ_r are the angles between the left and right arms and the center vertical line at the shoulder joints.
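The symmetry term can be sketched as follows; `angle_to_vertical` is a hypothetical helper that measures a limb direction vector against the downward vertical axis:

```python
import numpy as np

def angle_to_vertical(limb_vec):
    """Angle (radians) between a limb direction vector and the downward vertical."""
    v = np.array([0.0, -1.0, 0.0])
    cosang = np.dot(limb_vec, v) / (np.linalg.norm(limb_vec) * np.linalg.norm(v))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def symmetry_loss(left_leg, right_leg, left_arm, right_arm):
    """|alpha_l - alpha_r| + |phi_l - phi_r| over the leg and arm pairs."""
    return (abs(angle_to_vertical(left_leg) - angle_to_vertical(right_leg))
            + abs(angle_to_vertical(left_arm) - angle_to_vertical(right_arm)))

# Mirror-symmetric limbs (equal angles on both sides) yield zero loss.
loss = symmetry_loss(np.array([0.2, -1.0, 0.0]), np.array([-0.2, -1.0, 0.0]),
                     np.array([0.3, -1.0, 0.0]), np.array([-0.3, -1.0, 0.0]))
```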

Lastly, in order to provide sufficient ground reaction forces, we constrain the feet to the ground. Given the ground elevation at each person’s location in each frame, we minimize the volume between the feet and the ground, as shown in Figure 5. We simplify the volume model between the feet and the ground as the sum of the volumes of a rectangular cuboid (shaded in pink) and a triangular prism (shaded in green). We do so for both feet so that, in sum, at least some transfer of force is occurring between the feet and the ground. We also encourage more ground contact, since humans generally use their full plantar aspect (the underside/sole of the feet) during walking and do not usually tiptoe [46]. Thus, the volume loss from the ground plane is written as:

L_g = w·l·h + (1/2)·w·l^2·tan(φ_f),   (3)

where w is the width of the human foot, h is the vertical distance from the heel of the foot to the ground, l is the length of the human foot, and φ_f is the angle between the foot and the horizontal ground plane. Note this requires a local ground-plane estimate. In our case, this is derived from LiDAR data from an autonomous vehicle (AV), but it could also be estimated from stereo or other monocular vanishing-point cues [47, 48, 49]. The w and l values are estimated from the SMPL rest pose.
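The simplified volume model can be sketched directly; the cuboid-plus-prism decomposition below is our reading of Figure 5 (a cuboid under the raised heel plus a prism from the foot pitch), so treat it as an approximation rather than the exact published formula:

```python
import numpy as np

def ground_volume_loss(w, l, h, phi):
    """Foot-to-ground volume: a rectangular cuboid under the raised heel
    (w * l * h) plus a triangular prism from the foot pitch
    (0.5 * w * l**2 * tan(phi)). Assumed decomposition, for illustration."""
    return w * l * h + 0.5 * w * l**2 * np.tan(phi)

flat = ground_volume_loss(w=0.1, l=0.25, h=0.0, phi=0.0)  # flat foot: no gap
tilted = ground_volume_loss(w=0.1, l=0.25, h=0.02, phi=np.deg2rad(20))
```

A foot flat on the ground incurs zero loss; lifting the heel or pitching the foot grows the trapped volume and thus the penalty.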

Fig. 5: An illustration of the ground constraint on the feet. (a) A 2D view of the space between the foot and the ground. (b) Pink shading: a rectangular cuboid; green shading: a triangular prism. (c) The same view with mathematical notation.

Therefore, our training objective (total loss function) can be written as:

L = L_p + λ_1·L_s + λ_2·L_g,   (4)

where L_p is the loss from the gait cycle, L_s is the loss based on body mirror symmetry, L_g is the loss based on the volume from the ground plane, and λ_1 and λ_2 are user-set regularization parameters that adjust the weighting of the bio-inspired loss terms. The values of λ_1 and λ_2 used in the following experiments were determined through loop testing.

III-C Next-Frame Prediction

We formulate the next-frame prediction as a supervised learning problem. First, we construct training, validation, and testing sequences by creating batches from all pose sequences of length T + 1, denoted X_{t-T+1}, …, X_{t+1}, in the dataset. The first T poses are the inputs to the network and the last pose is the next-frame target to be predicted. When T ≥ 2, we use the proposed 2LR-LSTM network with training objective (4) for prediction. When T = 1 (only given one frame to predict the next), we define the frame difference to be the median frame difference over all training data and apply this frame difference to predict the next frame, assuming that a person follows the leading direction of the population flow [50].
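The sliding-window construction for supervised next-frame training can be sketched as follows (toy dimensions for illustration):

```python
import numpy as np

def make_windows(seq, T):
    """Split one pose sequence (L, D) into supervised samples:
    T input frames and one next-frame target each."""
    X, y = [], []
    for i in range(len(seq) - T):
        X.append(seq[i:i + T])
        y.append(seq[i + T])
    return np.array(X), np.array(y)

seq = np.arange(20, dtype=float).reshape(10, 2)  # toy 10-frame, 2-D sequence
X, y = make_windows(seq, T=6)                    # 4 windows of 6 frames each
```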

III-D Multiple Timestep Prediction

In multiple-timestep prediction, given X_{t-T+1}, …, X_t, we first predict X̂_{t+1}. Then, this prediction at time t+1 is fed back to the network, and we predict the pose at t+2 based on the sequence X_{t-T+2}, …, X_t, X̂_{t+1}. This process is marked as “MTP” (dashed line) in Figure 3. In this way, we can continuously output poses at times t+1, t+2, t+3, …, t+k, for any timestep k in the future.
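The MTP feedback loop can be sketched as an autoregressive rollout. The `constant_velocity` stand-in below merely repeats the most recent frame difference; it takes the place of the trained network purely for illustration:

```python
import numpy as np

def rollout(window, predict_diff, steps):
    """Autoregressive multiple-timestep prediction: predict a frame
    difference, add it to the last frame, append the new frame to the
    look-back window, and repeat."""
    window = list(window)
    preds = []
    for _ in range(steps):
        d_next = predict_diff(np.array(window))  # stand-in for the trained net
        nxt = window[-1] + d_next
        preds.append(nxt)
        window = window[1:] + [nxt]              # slide the window forward
    return np.array(preds)

# Stand-in model: repeat the most recent frame difference (constant velocity).
constant_velocity = lambda w: w[-1] - w[-2]
window = [np.array([float(i), 0.0]) for i in range(6)]  # frames at x = 0..5
preds = rollout(window, constant_velocity, steps=3)
```

Each predicted frame becomes part of the input for the next step, which is also why errors compound over long horizons.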

IV The PedX Dataset and Experimental Setup

This section first describes the PedX dataset, the in-the-wild pedestrian pose dataset used for the experiments. Then, the baseline methods used for comparison and the evaluation metrics are described. The data pre-processing procedure for the PedX dataset is also presented.

IV-A The PedX Dataset

The PedX dataset [21] was collected in 2017 in real urban intersections in downtown Ann Arbor, Michigan, USA. The dataset contains collections from three four-way-stop intersections with heavy pedestrian traffic. The PedX dataset contains over 10,000 pedestrian poses and over 1800 continuous sequences of varying length (average sequence length is six frames). The PedX dataset consists of data from two stereo RGB camera pairs and four Velodyne LiDAR sensors. The camera videos were collected at approximately six frames per second (fps). We collected this dataset from a parked car facing the intersection and recorded in-the-wild pedestrian behavior (pedestrians span a range of 5-45 meters from the cameras). The 3D pedestrian pose in each frame was obtained by optimizing the manually-annotated 2D pedestrian pose and 3D LiDAR point cloud as described in Kim et al. [21]. Given such (known) 3D pedestrian poses (also called “3D training labels”) in a few frames in past sequences, our proposed network predicts the 3D pedestrian pose in the next frame and multiple timesteps in the future.

The PedX dataset also contains an evaluation dataset collected and annotated in a controlled outdoor environment with a motion capture (mocap) system (named the “mocap dataset”). The mocap dataset was collected using the same setup as the intersection data, but contains only one pedestrian, wearing mocap markers. We also evaluate the performance of our proposed method on the mocap dataset, since mocap ground truth is available [21].

IV-B Baseline Methods

We compare our proposed bio-LSTM network with several baseline methods. We first compare our network with the two-layer stacked LSTM recurrent neural network followed by a densely connected NN layer as a state-of-the-art baseline (denoted the “2LR-LSTM” method, as described in [11]) without the bio-constraints. The standard 2LR-LSTM is trained on 1) the skeleton-based 3D joint locations (denoted “skeleton joints” in the following tables) and 2) the SMPL parameters directly (denoted “trans.+pose”). The input size of this baseline network is N × (T − 1) × D, as defined in Section III-A.

Then, we compare our work with the “frame difference” baseline method [51]. In this baseline method, 3D pedestrian poses are predicted by computing the difference in translation and pose parameters between past frames and then applying this difference to future frames. For example, as shown in Figure 4, we compute d_t = X_t - X_{t-1}. Then, the predicted translation and pose at time t+1 equal the translation and pose at time t plus d_t. This baseline method essentially enforces the periodicity constraint, but does not train an LSTM network.

In addition, we analyze the effect of each loss term in our bio-inspired objective function and summarize the results for using different loss terms in an ablation study.

IV-C Evaluation Metrics

The outputs of our proposed bio-LSTM network are 85 SMPL parameters. Note that we assume the 10 shape parameters do not change from frame to frame for each person. From the SMPL parameters, we compute the locations of the 6,890 vertices that form the 3D full-body mesh, according to Loper et al. [28]. In this paper, we evaluate our method using the vertex root-mean-square error (vertex RMSE) as well as the standard 3D mean-per-joint-position error (MPJPE) [52, 21]. Since the MPJPE only evaluates skeleton-based joint locations and does not capture differences in the full mesh, the vertex RMSE is helpful in catching biologically infeasible poses such as that in Figure 1(d). We also compute the RMSE in global translation as well as the mean-per-joint-angular error (MPJAE) [53] on all 24 joint angles.
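The two position metrics can be sketched as follows (with meters in, an error of 0.005 corresponds to 5 mm; the toy inputs are for illustration):

```python
import numpy as np

def mpjpe(joints_pred, joints_true):
    """Mean per-joint position error: mean Euclidean distance over joints."""
    return np.mean(np.linalg.norm(joints_pred - joints_true, axis=-1))

def vertex_rmse(verts_pred, verts_true):
    """Root-mean-square error over all 3D mesh vertex coordinates."""
    return np.sqrt(np.mean((verts_pred - verts_true) ** 2))

joints_true = np.zeros((24, 3))                            # 24 SMPL joints
joints_pred = joints_true + np.array([0.003, 0.0, 0.004])  # 5 mm offset each
err = mpjpe(joints_pred, joints_true)
```

The same `vertex_rmse` applied to the (6,890, 3) vertex arrays penalizes mesh-level differences, such as the wrist twist of Figure 1(d), that identical joint locations would hide.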

IV-D Data Pre-Processing

In our prediction experiments, we normalize the translation and pose parameters. The translation parameters are normalized by their max and min ranges along the x, y, and z axes, and the joint-angle magnitudes are normalized to a fixed range. In our PedX experiments, we use 85% of the data sequences for training, 10% for validation, and 5% for testing. This split was selected to ensure a large number of training sequences as well as enough test data to evaluate our results. The sequences were randomly shuffled during training, and we report the mean and standard deviation across three random initializations.
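Per-axis min-max normalization of the translation parameters might look like the following; the target range [0, 1] here is an assumption for illustration, since the exact range is not specified above:

```python
import numpy as np

def minmax_normalize(x, lo, hi):
    """Scale each axis into [0, 1] using per-axis min/max ranges.
    (Assumed target range; the paper's exact range is unspecified here.)"""
    return (x - lo) / (hi - lo)

# Toy translations (meters) spanning the x, y, z ranges of a scene.
trans = np.array([[0.0, -5.0, 10.0], [45.0, 5.0, 20.0]])
lo, hi = trans.min(axis=0), trans.max(axis=0)
norm = minmax_normalize(trans, lo, hi)
```

At inference time the same `lo`/`hi` values from the training set would be reused, and predictions mapped back with the inverse transform.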

Our training labels came from the previous 3D pose estimation optimization method of Kim et al. [21]. Although their method achieves state-of-the-art estimation results, there is still noise when capturing data from the vehicle due to measurement inaccuracy in the 3D LiDAR point-cloud data and the long observation range. In our prediction experiments, we eliminated noisy models (“outliers”), such as frames with a large translation jump within a sequence or a sudden change of root orientation, as shown in Figure 6. We present prediction results trained on both filtered and noisy labels to show that our proposed method handles such noise robustly.

Fig. 6: An illustration of noisy poses (“outliers”) from the field data. (a) A person model with a sudden jump in translation that exceeds the translation-distance threshold. (b) A person model with an incorrect body orientation in a sequence (marked with a red arrow).

V Results and Discussion

In this section, we present results for next-frame prediction on both the PedX and the mocap data. The standard deviations across three random initializations are presented in parentheses in the following tables.

Table I presents results for next-frame prediction on the PedX dataset with look-back window length T = 6; this value was chosen because a pedestrian generally completes a walking cycle in five to six frames in the PedX dataset. Table II presents results for next-frame prediction on the mocap dataset with the same window length. Our method achieves around 85 mm error (full-body mesh in the global frame) on outdoor intersection data and 73 mm error on mocap data, over a global translation range of approximately 45 meters (thus, an error rate of 0.16%-0.19%). The average angle error is 13.5°. Furthermore, in both experiments, our proposed network yields better prediction results (lower RMSE) in translation, joints, vertices, and angles. We observed that the gait periodicity loss (L_p) was the most prominent feature and produced much smaller error than the baseline methods (a 36.8% improvement over predicting only skeleton joints and a 21.0% improvement in vertex RMSE). Adding the mirror symmetry constraint (L_s) enabled a modest additional gain (around 1.6%). Figure 7 shows a qualitative example of our prediction results.

Fig. 7: A qualitative example of predicted pedestrian 3D poses in a walking cycle. The green meshes are predicted by our network, and the red meshes are the “ground truth” labels optimized in [21].
Methods | trans | MPJPE | vertex | MPJAE (°)
Skeleton Joints | – | 130.8(18.1) | – | –
Trans.+Pose | 81.6(18.9) | 102.2(16.8) | 104.4(15.4) | 16.1(1.8)
Frame Diff. | 61.6(3.2) | 109.6(10.5) | 107.8(9.79) | 23.9(3.4)
Ours (L_p) | 52.9(2.7) | 82.6(5.7) | 85.2(5.4) | 15.8(2.0)
Ours (L_p + L_s) | 53.0(2.8) | 82.6(5.6) | 84.8(5.1) | 15.8(1.8)
TABLE I: Next-frame prediction results on the PedX dataset, T = 6. In all tables, the unit for trans, MPJPE, and vertex error is mm.
Methods | trans | MPJPE | vertex | MPJAE (°)
Skeleton Joints | – | 182.9(42.0) | – | –
Trans.+Pose | 73.8(27.6) | 101.1(23.2) | 108.1(21.6) | 15.5(0.2)
Frame Diff. | 72.3(23.3) | 84.8(15.5) | 87.1(15.0) | 11.3(1.4)
Ours (L_p) | 48.8(1.2) | 68.2(0.8) | 73.8(0.7) | 11.3(0.1)
Ours (L_p + L_s) | 48.6(1.2) | 67.4(1.3) | 72.6(1.3) | 11.2(0.0)
TABLE II: Next-frame prediction results on the mocap dataset, T = 6.
Methods | trans | MPJPE | vertex | MPJAE (°)
Trans.+Pose on PedX | 158.8(19.1) | 180.0(19.2) | 172.1(19.4) | 16.2(1.3)
Ours on PedX | 144.6(10.2) | 165.4(10.9) | 164.8(10.3) | 19.9(1.8)
Trans.+Pose on mocap | 107.7(18.4) | 130.4(18.4) | 132.5(15.4) | 17.0(0.5)
Ours on mocap | 77.8(13.4) | 91.0(9.9) | 93.2(9.7) | 11.5(1.0)
TABLE III: Next-frame prediction results on the PedX and mocap datasets, T = 1.
Methods | trans | MPJPE | vertex | MPJAE (°)
Skeleton Joints | – | 292.8(32.2) | – | –
Trans.+Pose | 223.7(26.9) | 231.7(26.6) | 236.0(25.4) | 17.0(0.5)
Frame Diff. | 87.6(5.7) | 95.6(5.4) | 97.7(5.3) | 10.7(0.5)
Ours (L_p) | 65.7(0.1) | 82.5(0.3) | 85.6(0.3) | 11.7(0.1)
Ours (L_p + L_s) | 65.6(0.3) | 80.1(0.6) | 83.3(0.7) | 10.8(0.0)
TABLE IV: Next-frame prediction results on the mocap data, trained with noisy labels from the PedX dataset, T = 6.

Table III shows the prediction results when T = 1, i.e., prediction without pose information from prior frames. From the table, we can observe that our method still outperforms the standard 2LR-LSTM prediction without biomechanical constraints. The error is significantly smaller on the mocap data than on the PedX data, as there is only one pedestrian in the mocap data and the frame difference is more regular than in the in-the-wild intersection data with multiple pedestrians.

Table IV shows the prediction results on the mocap data, using models trained with noisy training labels that reflect the errors one typically sees in real field data, as described in Section IV-D. As can be seen, the baseline methods have significantly higher error due to noise in the input data, yet our proposed methods yield nearly unchanged prediction results.

Table V shows the ground distance error of our prediction results on the PedX data. We compare the ground distance of our prediction results before and after adding the L_g loss term. We also report the ground distance error of the previously estimated poses [21]. It can be seen that the L_g loss term was able to constrain the feet closer to the local ground plane. The remaining error is likely due to estimation error in the local ground plane from the LiDAR point-cloud data and the simplified volume-loss model in (3). The estimated lengths and widths of the foot and leg also change slightly with human body shape, which may contribute to the ground distance error as well.

Methods | distance to the ground (mm)
Existing model [21] | 32.4(0.9)
Ours (without L_g) | 40.1(2.1)
Ours (with L_g) | 29.5(4.0)
TABLE V: Distance to the ground after adding L_g.

Table VI shows the vertex RMSE results across different actions, evaluated on a subset of the annotated PedX dataset. The actions under investigation include: simply walking (1307 sequences), carrying a coffee cup in the right hand (10 sequences), carrying something in the left hand (53 sequences), carrying/playing with a cellphone (58 sequences), pushing a bicycle (45 sequences), and cycling (9 sequences). The mean and standard deviation values (in parentheses) are reported across sequences. As can be seen, our method achieves lower vertex RMSE on all actions except cycling. Among all the actions, simply walking has the lowest vertex RMSE, which is expected since our objective functions focus on walking gait and there are a large number of walking sequences in the dataset. The large error on the cycling action is partly due to the limited number of cycling frames in the dataset and the fact that cycling and walking have significantly different strides. In this case, our network (which was trained for pedestrian walking poses) was not able to predict as well for cyclists, while the frame difference method was able to preserve a more accurate cycling stride (i.e., the translation difference). However, it is worth noting that, even with limited cycling training data, our network is still able to predict biomechanically feasible poses for cyclists. (Additional mesh prediction results and examples can be viewed in our supplementary video.)

Actions | simply walking | cup | carry (left hand)
Frame Diff. | 97.8 (37.6) | 108.3 (16.9) | 92.6 (30.0)
Ours | 73.2 (30.9) | 90.0 (21.3) | 73.7 (32.6)
Actions | phone | push bike | cycling
Frame Diff. | 110.0 (37.3) | 97.4 (38.9) | 119.0 (31.5)
Ours | 91.2 (33.3) | 85.7 (33.3) | 331.1 (66.5)
TABLE VI: Vertex RMSE results on different actions in the annotated PedX dataset.
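The vertex RMSE reported above can be sketched as the root-mean-square of per-vertex Euclidean distances between a predicted mesh and its ground-truth counterpart with corresponding vertices (for SMPL, an array of 6890 x 3 vertex positions). The function name and toy data are our own.

```python
import numpy as np

def vertex_rmse(pred_vertices, gt_vertices):
    """RMSE over per-vertex Euclidean distances between two corresponding meshes."""
    diff = np.asarray(pred_vertices) - np.asarray(gt_vertices)
    per_vertex = np.linalg.norm(diff, axis=-1)    # Euclidean distance per vertex
    return float(np.sqrt(np.mean(per_vertex ** 2)))

# Toy example: every vertex displaced by 0.1 along x.
gt = np.zeros((4, 3))
pred = gt.copy()
pred[:, 0] = 0.1
print(round(vertex_rmse(pred, gt), 3))  # 0.1 (i.e. 100 mm)
```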

Given five observed frames, we ran multi-timestep prediction for 31 timesteps into the future, forming sequences of approximately six seconds in total. We evaluated on 196 in-the-wild pedestrian sequences of 36 frames or longer in the PedX dataset. Figures 8(a) and 8(b) show the vertex RMSE results for MTP prediction of our proposed network and all baseline methods, compared against the ground truth (observed) poses. Note that the first five timesteps were given as input, so the corresponding errors are zero across all methods. We observed that the errors of the comparison methods increase drastically over time as noise accumulates, whereas our proposed network achieves much lower error in comparison.
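The multi-timestep rollout described above can be sketched as an autoregressive loop: after the observed frames are consumed, each predicted pose is fed back as the next input. Here `step(history)` is a hypothetical one-step predictor standing in for the trained Bio-LSTM; the toy predictor is a simple linear extrapolation, not the paper's model.

```python
import numpy as np

def rollout(step, observed, n_future):
    """Autoregressively predict `n_future` poses from the observed ones."""
    history = list(observed)          # e.g. 5 observed pose vectors
    preds = []
    for _ in range(n_future):         # e.g. 31 future timesteps (~6 s total)
        nxt = step(history)
        preds.append(nxt)
        history.append(nxt)           # feed the prediction back as input
    return preds

# Toy one-step predictor: linear extrapolation from the last two poses.
def toy_step(history):
    return history[-1] + (history[-1] - history[-2])

obs = [np.array([float(t)]) for t in range(5)]   # a pose moving 1 unit/frame
future = rollout(toy_step, obs, n_future=31)
print(float(future[-1][0]))  # constant velocity continues: 4 + 31 = 35.0
```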

We further analyzed our MTP prediction performance. We noticed that, in several cases, pedestrians move with a high degree of stochasticity (sudden turns, crossing the crosswalk multiple times in different directions, etc.). In these cases, the network predicted the person walking along a smooth trajectory but in a completely different direction, and the error becomes very large (5 meters after 6 seconds). This effect of stochasticity in human motion was also reported in [54, 11, 14]. These cases contribute to a higher mean vertex RMSE, especially when predicting far into the future. When we instead plot the median of the translation error, as shown in Figures 8(c) and 8(d), our network achieves approximately 10 cm error after one second and less than 80 cm after 6 seconds, while the comparison methods can be up to 7 meters off. The Frame Difference baseline also did reasonably well in translation RMSE (though still not as well as our method). However, inspecting the actual predicted poses, the frame difference method yields rather unrealistic and biomechanically infeasible poses, likely because its pose prediction is simply a linear extrapolation of the frame difference. Our method, on the other hand, maintains a steady walking gait, as shown in Figure 9.
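The choice of the median over the mean in this analysis can be illustrated numerically: a few highly stochastic sequences produce outlier errors that pull the mean up sharply, while the median stays representative of typical performance. The error values below are illustrative toy numbers, not results from the paper.

```python
import numpy as np

# Toy per-sequence translation errors (metres) at one future timestep:
# four typical sequences plus one outlier from a sudden direction change.
errors = np.array([0.08, 0.10, 0.12, 0.09, 5.0])

print(round(float(np.mean(errors)), 3))   # mean pulled up by the outlier: 1.078
print(float(np.median(errors)))           # median robust to it: 0.1
```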

Fig. 8: Plots of the MTP results. X-axis: time in seconds (s) with an increment of 1/6 s (time per frame). Y-axis: vertex RMSE and median translation RMSE with error bars. (a)(c) Overall MTP comparison between our proposed method and the baselines. (b)(d) Zoomed-in views.
Fig. 9: A qualitative example of MTP prediction. The green meshes are predicted poses and the red meshes are the “ground truth” labels as optimized in [21]. Both methods have low translation error, but our method preserves a steady walking gait while the frame difference method yields unrealistic and biomechanically infeasible poses.

Our proposed network was implemented in Python 3.6 using the Keras framework [55]. With the current unoptimized code, prediction takes approximately 1 ms per person per frame on a desktop computer with an Intel i7 3.60 GHz CPU and two NVIDIA TITAN X GPUs. Future work will include applying the approach to real-time data capture and prediction in autonomous vehicle applications.

VI. Conclusion

This paper proposes Bio-LSTM, a biomechanically inspired recurrent neural network for 3D pedestrian pose and gait prediction. Bio-LSTM is able to predict the global location and 3D full-body mesh with articulated body pose in metric space. Evaluating our method on PedX, a large-scale, in-the-wild urban intersection pedestrian dataset, we predict more accurate and biomechanically feasible body poses than the current state of the art. Furthermore, our network is robust to noise in the training data.

Currently, this work focuses on pedestrian pose prediction at urban intersections, which has applications in planning human-oriented, pedestrian-friendly intersections and smart cities. In addition, our work may benefit gait studies of bipedal robots and be applied to the monitoring and development of clinical gait rehabilitation systems. We provided a detailed analysis of a variety of human actions in the intersection environment and showed improved prediction results on all pedestrian (non-cyclist) actions. It is also possible to extend this work to predict other activities, such as running. Additionally, we currently assume independence between pedestrians; it would be interesting to consider constraints that accommodate multiple persons in the same space [15]. Future work will also include incorporating pedestrian-pedestrian and car-pedestrian interactions.

Our novel objective function took a first step toward imposing biomechanical constraints on pedestrian gait prediction. However, many aspects of human gait can be further investigated, such as the dynamical asymmetry of gaits [56] and the changing pressure on different parts of the foot over a human gait cycle [26, 27]. In addition, although body shapes were optimized in the previous work and used in our prediction, we did not differentiate between genders and simply used a gender-neutral SMPL mesh. However, it has been shown in the literature that men and women have different stride lengths and that individual gait can be distinguished for each person [24]. By using the frame difference constraint, we inherently assume that each person maintains their own stride and personal gait characteristics; it is possible to further investigate such individual gait characteristics in pose prediction.

In addition, it would be interesting to extend our current work to varying sequence lengths and to sequences with finer time resolution. It is also possible to explore imposing biomechanical constraints on alternative network architectures, such as QuaterNet [13]. Future work will also include combining pose estimation and prediction into an end-to-end pose analysis system.

Acknowledgment

The authors thank Wonhui Kim for making the 3D pose estimation results on the PedX data available [21]. The authors also thank Charles Barto for his work in visualizing the 3D SMPL mesh models.


  • [1] S. Thrun, M. Bennewitz, W. Burgard, A. B. Cremers, F. Dellaert, D. Fox, D. Hahnel, C. Rosenberg, N. Roy, J. Schulte et al., “Minerva: A second-generation museum tour-guide robot,” in IEEE Int. Conf. Robot. Autom. (ICRA), vol. 3, 1999, pp. 1999–2005.
  • [2] Y. Luo, P. Cai, A. Bera, D. Hsu, W. S. Lee, and D. Manocha, “Porca: Modeling and planning for autonomous driving among many pedestrians,” IEEE Robot. Autom. Lett. (RA-L), vol. 3, no. 4, pp. 3418–3425, 2018.
  • [3] S. Li and A. B. Chan, “3d human pose estimation from monocular images with deep convolutional neural network,” in Asian Conf. Comput. Vis. (ACCV).   Springer, 2014, pp. 332–347.
  • [4] S. Park, J. Hwang, and N. Kwak, “3d human pose estimation using convolutional neural networks with 2d pose information,” in Eur. Conf. Comput. Vis. (ECCV).   Springer, 2016, pp. 156–169.
  • [5] A. Toshev and C. Szegedy, “Deeppose: Human pose estimation via deep neural networks,” in IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2014, pp. 1653–1660.
  • [6] E. Simo-Serra, A. Quattoni, C. Torras, and F. Moreno-Noguer, “A joint model for 2d and 3d pose estimation from a single image,” in IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2013, pp. 3634–3641.
  • [7] R. A. Güler, N. Neverova, and I. Kokkinos, “Densepose: Dense human pose estimation in the wild,” arXiv preprint arXiv:1802.00434, 2018.
  • [8] C. Zimmermann, T. Welschehold, C. Dornhege, W. Burgard, and T. Brox, “3d human pose estimation in rgbd images for robotic task learning,” in IEEE Int. Conf. Robot. Autom. (ICRA), 2018, pp. 1986–1992.
  • [9] Y.-W. Chao, J. Yang, B. L. Price, S. Cohen, and J. Deng, “Forecasting human dynamics from static images.” in IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2017, pp. 3643–3651.
  • [10] S. Toyer, A. Cherian, T. Han, and S. Gould, “Human pose forecasting via deep markov models,” in IEEE Int. Conf. Digit. Image Comput.: Techniques and Applications (DICTA), 2017, pp. 1–8.
  • [11] K. Fragkiadaki, S. Levine, P. Felsen, and J. Malik, “Recurrent network models for human dynamics,” in IEEE Int. Conf. Comput. Vis. (ICCV), 2015, pp. 4346–4354.
  • [12] J. Walker, K. Marino, A. Gupta, and M. Hebert, “The pose knows: Video forecasting by generating pose futures,” in IEEE Int. Conf. Comput. Vis. (ICCV), 2017, pp. 3352–3361.
  • [13] D. Pavllo, D. Grangier, and M. Auli, “Quaternet: A quaternion-based recurrent model for human motion,” arXiv preprint arXiv:1805.06485, 2018.
  • [14] A. Jain, A. R. Zamir, S. Savarese, and A. Saxena, “Structural-rnn: Deep learning on spatio-temporal graphs,” in IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 5308–5317.
  • [15] A. Zanfir, E. Marinoiu, and C. Sminchisescu, “Monocular 3d pose and shape estimation of multiple people in natural scenes–the importance of multiple scene constraints,” in IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2018, pp. 2148–2157.
  • [16] J. Martinez, M. J. Black, and J. Romero, “On human motion prediction using recurrent neural networks,” in IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2017, pp. 4674–4683.
  • [17] W. Lotter, G. Kreiman, and D. Cox, “Deep predictive coding networks for video prediction and unsupervised learning,” arXiv preprint arXiv:1605.08104, 2016.
  • [18] R. Villegas, J. Yang, S. Hong, X. Lin, and H. Lee, “Decomposing motion and content for natural video sequence prediction,” arXiv preprint arXiv:1706.08033, 2017.
  • [19] V. Tan, I. Budvytis, and R. Cipolla, “Indirect deep structured learning for 3d human body shape and pose prediction,” in British Mach. Vis. Conf. (BMVC), vol. 3, no. 5, 2017, pp. 1–11.
  • [20] G. Pavlakos, L. Zhu, X. Zhou, and K. Daniilidis, “Learning to estimate 3d human pose and shape from a single color image,” arXiv preprint arXiv:1805.04092, 2018.
  • [21] W. Kim, M. Srinivasan Ramanagopal, C. Barto, K. Rosaen, M.-Y. Yu, N. Goumas, R. Vasudevan, and M. Johnson-Roberson, “Pedx: Benchmark dataset for metric 3d pose estimation of pedestrians in complex urban intersections,” arXiv preprint arXiv:1809.03605, 2018.
  • [22] F. Bogo, A. Kanazawa, C. Lassner, P. Gehler, J. Romero, and M. J. Black, “Keep it smpl: Automatic estimation of 3d human pose and shape from a single image,” in Eur. Conf. Comput. Vis. (ECCV).   Springer, 2016, pp. 561–578.
  • [23] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997.
  • [24] N. F. Troje, C. Westhoff, and M. Lavrov, “Person identification from biological motion: effects of structural and kinematic cues,” Perception & Psychophysics, vol. 67, no. 4, pp. 667–675, 2005.
  • [25] C. L. Vaughan, B. L. Davis, and J. C. O’Connor, Dynamics of human gait, C. L. Vaughan, Ed.   Cape Town, South Africa: Kiboho Publishers, 1999.
  • [26] K. Kong and M. Tomizuka, “Smooth and continuous human gait phase detection based on foot pressure patterns,” in IEEE Int. Conf. Robot. Autom. (ICRA), 2008, pp. 3678–3683.
  • [27] D. A. Winter, “Human balance and posture control during standing and walking,” Gait & posture, vol. 3, no. 4, pp. 193–214, 1995.
  • [28] M. Loper, N. Mahmood, J. Romero, G. Pons-Moll, and M. J. Black, “Smpl: A skinned multi-person linear model,” ACM Trans. Graphics, vol. 34, no. 6, p. 248, 2015.
  • [29] Z. C. Lipton, J. Berkowitz, and C. Elkan, “A critical review of recurrent neural networks for sequence learning,” arXiv preprint arXiv:1506.00019, 2015.
  • [30] L. Sun, K. Jia, K. Chen, D.-Y. Yeung, B. E. Shi, and S. Savarese, “Lattice long short-term memory for human action recognition,” in IEEE Int. Conf. Comput. Vis. (ICCV), 2017, pp. 2166–2175.
  • [31] S. Park, B. Kim, C. M. Kang, C. C. Chung, and J. W. Choi, “Sequence-to-sequence prediction of vehicle trajectory via lstm encoder-decoder architecture,” in IEEE Intell. Vehicles (IV) Symp., 2018, pp. 1672–1678.
  • [32] L. Liu, Y. Zhou, and L. Shao, “Dap3d-net: Where, what and how actions occur in videos?” in IEEE Int. Conf. Robot. Autom. (ICRA), 2017, pp. 138–145.
  • [33] L. Sun, Z. Yan, S. M. Mellado, M. Hanheide, and T. Duckett, “3dof pedestrian trajectory prediction learned from long-term autonomous mobile robot deployment data,” in IEEE Int. Conf. Robot. Autom. (ICRA), 2018, pp. 1–7.
  • [34] A. Kanazawa, M. J. Black, D. W. Jacobs, and J. Malik, “End-to-end recovery of human shape and pose,” in IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2018.
  • [35] C. Lassner, J. Romero, M. Kiefel, F. Bogo, M. J. Black, and P. V. Gehler, “Unite the people: Closing the loop between 3d and 2d human representations,” in IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), vol. 2, 2017, pp. 4704–4713.
  • [36] T. Shajina and P. B. Sivakumar, “Human gait recognition and classification using time series shapelets,” in IEEE Int. Conf. Adv. Comput. Commun. (ICACC), 2012, pp. 31–34.
  • [37] M. L. Felis and K. Mombaur, “Synthesis of full-body 3-d human gait using optimal control methods,” in IEEE Int. Conf. Robot. Autom. (ICRA), 2016, pp. 1560–1566.
  • [38] M. Mert Ankarali, S. Sefati, M. S. Madhav, A. Long, A. J. Bastian, and N. J. Cowan, “Walking dynamics are symmetric (enough),” J. Royal Soc. Interface, vol. 12, no. 108, 2014.
  • [39] Y. B. Xu, C. H. Wang, P. Zalzal, O. Safir, and L. Guan, “Analysis of human gait bilateral symmetry for functional assessment after an orthopaedic surgery,” in Int. Conf. Image Anal. Recog.   Springer, 2009, pp. 627–636.
  • [40] M. M. Ankarali, “Variability, symmetry, and dynamics in human rhythmic motor control,” Ph.D. dissertation, Johns Hopkins Univ., 2015.
  • [41] T. Ramakrishnan, C.-A. Lahiff, and K. B. Reed, “Comparing gait with multiple physical asymmetries using consolidated metrics,” Frontiers in neurorobotics, vol. 12, no. 2, 2018.
  • [42] J.-H. Yoo and M. S. Nixon, “Markerless human gait analysis via image sequences,” in Int. Soc. Biomechanics XIXth Congr., Dunedin, NZ, 2003.
  • [43] B. Sun, X. Liu, X. Wu, and H. Wang, “Human gait modeling and gait analysis based on kinect,” in IEEE Int. Conf. Robot. Autom. (ICRA), 2014, pp. 3173–3178.
  • [44] M. K. Y. Mak, A. Patla, and C. Hui-Chan, “Sudden turn during walking is impaired in people with parkinson’s disease,” Experimental brain research, vol. 190, no. 1, pp. 43–51, 2008.
  • [45] C. Vondrick, H. Pirsiavash, and A. Torralba, “Anticipating visual representations from unlabeled video,” in IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 98–106.
  • [46] A. Phillips and S. McClinton, “Gait deviations associated with plantar heel pain: A systematic review,” Clinical Biomechanics, vol. 42, pp. 55–64, 2017.
  • [47] J. Zhao, J. Katupitiya, and J. Ward, “Global correlation based ground plane estimation using v-disparity image,” in IEEE Int. Conf. Robot. Autom. (ICRA), 2007, pp. 529–534.
  • [48] B. Micusk, H. Wildenauer, and M. Vincze, “Towards detection of orthogonal planes in monocular images of indoor environments,” in IEEE Int. Conf. Robot. Autom. (ICRA), 2008, pp. 999–1004.
  • [49] J. S. Gardner, J. L. Austerweil, and S. E. Palmer, “Vertical position as a cue to pictorial depth: Height in the picture plane versus distance to the horizon,” Attention, Perception, & Psychophysics, vol. 72, no. 2, pp. 445–453, 2010.
  • [50] Y. Zhao and H. Zhang, “A unified follow-the-leader model for vehicle, bicycle and pedestrian traffic,” Transp. Research Part B: Methodological, vol. 105, pp. 315–327, 2017.
  • [51] C. Zhan, X. Duan, S. Xu, Z. Song, and M. Luo, “An improved moving object detection algorithm based on frame difference and edge detection,” in IEEE Int. Conf. Image and Graphics (ICIG), 2007, pp. 519–523.
  • [52] C. Ionescu, D. Papava, V. Olaru, and C. Sminchisescu, “Human3.6m: Large scale datasets and predictive methods for 3d human sensing in natural environments,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 36, no. 7, pp. 1325–1339, 2014.
  • [53] T. von Marcard, R. Henschel, M. J. Black, B. Rosenhahn, and G. Pons-Moll, “Recovering accurate 3d human pose in the wild using imus and a moving camera,” in Eur. Conf. Comput. Vis. (ECCV), 2018.
  • [54] J. Bütepage, H. Kjellström, and D. Kragic, “Anticipating many futures: Online human motion prediction and generation for human-robot interaction,” in IEEE Int. Conf. Robot. Autom. (ICRA), 2018, pp. 1–9.
  • [55] F. Chollet et al., “Keras,” 2015.
  • [56] Z. Liu and S. Sarkar, “Improved gait recognition by gait dynamics normalization,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 6, pp. 863–876, 2006.