Looking to Relations for Future Trajectory Forecast

Chiho Choi and Behzad Dariush
Honda Research Institute USA
{cchoi,bdariush}@honda-ri.com
Abstract

Inferring relational behavior between road users, as well as between road users and their surrounding physical space, is an important step toward effective modeling and prediction of the navigation strategies adopted by participants in road scenes. To this end, we propose a relation-aware framework for future trajectory forecast. Our system aims to infer relational information from the interactions of road users with each other and with the environment. The first module visually encodes spatio-temporal features, capturing human-human and human-space interactions over time. The following module explicitly constructs pair-wise relations from these spatio-temporal interactions and identifies the more descriptive relations that strongly influence the future motion of the target road user by considering its past trajectory. The resulting relational features are used to forecast future locations of the target in the form of heatmaps, with additional guidance from spatial dependencies and consideration of prediction uncertainty. Extensive evaluations on a public benchmark dataset demonstrate the robustness and efficacy of the proposed framework, with performance exceeding that of state-of-the-art methods.

1 Introduction

Forecasting future trajectories of moving participants in indoor and outdoor environments has profound implications for execution of safe and naturalistic navigation strategies in partially and fully automated vehicles [3, 33] and robotic systems [40, 15]. While autonomous navigation of robotic systems in dynamic indoor environments is an increasingly important application that can benefit from such research, the potential societal impact may be more consequential in the transportation domain. This is particularly apparent considering the current race to deployment of automated driving and advanced driving assistance systems on public roads. Such technologies require advanced decision making and motion planning systems that rely on estimates of the future position of road users in order to realize safe and effective mitigation and navigation strategies.

Figure 1: Spatio-temporal features are visually encoded from a discretized grid to locally discover (i) human-human (e.g., woman-man) and (ii) human-space (e.g., man-ground, cyclist-cone) interactions over time. Then, their pair-wise relations with respect to the past motion of the target are investigated from a global perspective for trajectory forecast.

Related research [37, 1, 29, 18, 9, 10, 34, 36, 25, 26, 38] has attempted to predict future trajectories by focusing on social conventions, environmental factors, or pose and motion constraints. These methods have proven more effective when the prediction model learns to extract such features by considering human-human (i.e., between road users) or human-space (i.e., between a road user and the environment) interactions. (Throughout this paper, the word 'human' refers to any type of road user: pedestrian, car, cyclist, etc.) Recent approaches [16, 35] have incorporated both types of interaction to understand the behavior of agents toward their environments. However, they restrict human interactions to nearby surroundings and overlook the influence of distant obstacles on navigation, an assumption that does not hold in real-world scenarios. In this view, we present a framework where such interactions are limited neither to nearby road users nor to the surrounding medium. The proposed relation-aware approach fully discovers human-human and human-space interactions at local scales and learns to infer relations from these interactions at a global scale for future trajectory forecast (see Figure 1).

Relational inference has been presented in [28, 17] to reason about the implications of relations over graph-structured data. Recently, in [27], spatial relations between object pairs were inferred through a relation network pipeline. This approach is inherently flexible: 'an object' is defined as a feature representation extracted from each region of a discretized grid (see the grid in Figure 1), regardless of what exists in that region. Our work is analogous to [27] in how the word 'object' is defined. In our framework, an object is a visual encoding of the local spatial behavior of road users (if they exist) and environmental representations, together with their temporal interactions over time, which naturally corresponds to human-human and human-space interactions. On top of this, we learn to infer relational behavior between objects (i.e., spatio-temporal interactions in our context) from a global perspective.

In practice, the relations between all object pairs do not contribute equally to understanding the past and future motion of a specific road user. For example, a distant building behind a car carries no meaningful relational information for forecasting the future trajectory of an ego-vehicle moving forward. To address the varying importance of relations, the prediction model should incorporate a function that selectively weights pair-wise relations based on their potential influence on the future path of the target. Thus, we design a relation gate module (RGM) inspired by the internal gating process of a long short-term memory (LSTM) unit. Our RGM shares the same advantage of controlling information flow through multiple switch gates. While producing relations from spatio-temporal interactions, we enforce the module to identify the more descriptive relations that strongly influence the future motion of the target by further conditioning on its past trajectory.

An overview of the proposed approach is presented in Figure 2. Our system visually encodes spatio-temporal features (i.e., objects) through the spatial behavior encoder and the temporal interaction encoder using a sequence of past images (see Figure 3). The following RGM first infers the relational behavior of all object pairs and then determines which pair-wise relations will be meaningful for forecasting the future motion of the target agent given its past behavior (see Figure 4). As a result, the gated relation encoder (GRE) produces more informative relational features from the target's perspective. The next stage of our system forecasts the future trajectory of the target over the next few seconds using the aggregated relational features. Here, we predict future locations in the form of heatmaps, generating a pixel-level probability map that can be (i) further refined by considering spatial dependencies between the predicted locations and (ii) easily extended to learn the uncertainty of the forecast at test time.

The main contributions of this paper are as follows:

  1. Encoding of spatial behavior of road users and environmental representations together with their temporal interactions, which corresponds to human-human and human-space interactions.

  2. Design of relation gating process conditioned on the past motion of the target to capture more descriptive pair-wise relations that have a high potential to affect its future motion.

  3. Prediction of a pixel-level probability map that can be penalized with the guidance of spatial dependencies and extended to learn the uncertainty of the problem.

  4. Improvement of model performance over the best state-of-the-art method using the proposed framework with the aforementioned contributions.

2 Related Work

This section provides a review of deep learning based trajectory prediction methods. We refer the readers to [14] for a review of human action classification and motion and intention prediction, and to [22] for a review of human interaction, behavior understanding, and decision making.

Human-human interaction oriented approaches Discovering social interactions between humans has been a mainstream approach to predicting future trajectories [24, 2, 39, 18, 34, 31]. Following the pioneering work [11] on modeling human-human interactions, similar social models have been presented for data-driven methods. A social pooling layer was proposed in [1] between LSTMs to share intermediate features of neighboring individuals across frames, and its performance was further improved in [9]. While successful in many cases, these methods may fail to provide acceptable future paths in a complex road environment without the guidance of scene context.

Figure 2: Given a sequence of images, the GRE visually analyzes the spatial behavior of road users and their temporal interactions with respect to environments. The subsequent RGM of the GRE infers pair-wise relations from these interactions and determines which relations are meaningful from the target agent's perspective. The aggregated relational features are used to generate initial heatmaps through the TPN. Then, the following SRN further refines these initial predictions with guidance from their spatial dependencies. We additionally embed the uncertainty of the problem into our system at test time.

Human-space interaction oriented approaches Modeling the scene context of humans interacting with environments has been introduced as an additional modality alongside social interactions. [16] modeled human-space interactions using deep-learned scene features of agents' neighborhoods, assuming that only the local surroundings of the target affect its future motion. However, such a restriction of the interaction boundary is not realistic in real-world scenarios and may cause the model to fail for far-future predictions. More recently, [35] expanded local scene context with additional global-scale image features. However, their global features implicitly provide information about road layouts rather than explicitly modeling the interactive behavior of humans with respect to road structures and obstacles. In contrast, our framework is designed to discover local human-human and human-space interactions and reason about them at a global scale. We locally encode the spatial behavior of road users and environmental representations together with their temporal interactions over time. Then, our model infers relations from a global perspective to understand the past and future behavior of the target with respect to other agents and the environment.

Human action oriented approaches These approaches rely on the action cues of individuals. To predict the future trajectory of pedestrians from first-person videos, temporal changes of orientation and body pose are encoded as features in [36]. In parallel, [10] uses head pose as a proxy to build a better forecasting model. Both methods find that gaze, as inferred from body or head orientation, is highly correlated with a person's destination. However, as with human-human interaction oriented approaches, these methods may not generalize well to unseen locations because the model does not consider the road layout.

3 Relational Inference

We extend the definition of 'object' in [27] to a spatio-temporal feature representation extracted from each region of the discretized grid over time. This enables us to visually discover (i) human-human interactions where multiple road users interact with each other over time, (ii) human-space interactions from their interactive behavior with the environment, and (iii) environmental representations by encoding the structural information of the road. The pair-wise relations between objects (i.e., local spatio-temporal features) are inferred from a global perspective. Moreover, we design a new gating function to control information flow so that the network can extract descriptive relational features by attending to relations that have a high potential to influence the future motion of the target.

Figure 3: We model human-human and human-space interactions by visually encoding spatio-temporal features from each region of the discretized grid.

3.1 Spatio-Temporal Interactions

Given τ past images I = {I_1, …, I_τ}, we visually extract spatial representations of the static road structures, the road topology, and the appearance of road users from individual frames using the spatial behavior encoder with 2D convolutions. The features concatenated along the time axis form the spatial representations S. As a result, each entry s_i of S contains frame-wise knowledge of road users and road structures in the i-th region of the given environment. We then individually process each entry of S using the temporal interaction encoder with a 3D convolution to model sequential changes of road users and road structures together with their temporal interactions, as in Figure 3. We observed that the joint use of 2D convolutions for spatial modeling and a 3D convolution for temporal modeling extracts more discriminative spatio-temporal features than alternatives such as 3D convolutions throughout or 2D convolutions with an LSTM; refer to Section 5.2 for a detailed description and empirical validation. The resulting spatio-temporal features F contain a visual interpretation of the spatial behavior of road users and their temporal interactions with each other and with the environment. We decompose F into a set of objects O = {o_1, …, o_n}, where n is the number of grid regions and each object o_i is a d-dimensional feature vector.
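To make this two-stage encoding concrete, below is a minimal PyTorch sketch of the design: per-frame 2D convolutions followed by a single 3D convolution that collapses the time axis. All layer widths, kernel sizes, and the grid resolution are illustrative assumptions, not the paper's exact architecture (which is given in its supplementary material).

```python
import torch
import torch.nn as nn

class SpatioTemporalEncoder(nn.Module):
    """Sketch: 2D convs encode per-frame spatial behavior; a 3D conv then
    models temporal interactions across frames. Sizes are assumptions."""
    def __init__(self, tau=8, d=64):
        super().__init__()
        # Spatial behavior encoder: applied to every frame independently.
        self.spatial = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, d, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        )
        # Temporal interaction encoder: the 3D kernel spans all tau frames,
        # collapsing the time axis into one spatio-temporal volume.
        self.temporal = nn.Conv3d(d, d, kernel_size=(tau, 3, 3), padding=(0, 1, 1))

    def forward(self, frames):
        # frames: (B, tau, 3, H, W), e.g. H = W = 256
        B, T = frames.shape[:2]
        s = self.spatial(frames.flatten(0, 1))                 # (B*T, d, H/8, W/8)
        s = s.view(B, T, *s.shape[1:]).permute(0, 2, 1, 3, 4)  # (B, d, T, H/8, W/8)
        f = self.temporal(s).squeeze(2)                        # (B, d, H/8, W/8)
        # Each grid cell of f is one "object": a d-dimensional feature
        # carrying local human-human / human-space interactions over time.
        return f.flatten(2).transpose(1, 2)                    # (B, n, d)
```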

Figure 4: The relation gate module controls information flow through multiple switches and determines not only whether the given object pair has meaningful relations from a spatio-temporal perspective, but also how important their relations are with respect to the motion context of the target.

3.2 Relation Gate Module

Observations from actual prediction scenarios in road scenes suggest that humans focus on only a few important relations that may potentially constrain their intended path, instead of inferring every relational interaction among all road users. In this view, we propose a module that processes information discriminatively with respect to relational importance.

We draw on the internal gating process of an LSTM unit, which controls information flow through multiple switch gates. Specifically, the LSTM employs a sigmoid function with a tanh layer to determine not only which information is useful but also how much weight it should be given. The efficacy of this control process leads us to design a relation gate module (RGM), which is essential for generating more descriptive relational features from the target's perspective. The structure of the proposed RGM is displayed in Figure 4.

Let g_θ be a function which takes as input a pair of objects (o_j, o_k) and a spatial context q^i. Note that q^i is a feature representation extracted from the past trajectory of the i-th road user observed in the scene. Then, the inferred relational features are described as follows:

\[ r^{i} \;=\; \sum_{\forall (j,k)} g_{\theta}\big(o_j,\, o_k,\, q^{i}\big), \tag{1} \]

where θ denotes the learnable parameters of g_θ. Through the function g_θ, we first determine whether a given object pair has meaningful relations from a spatio-temporal perspective by computing \(\tanh_f(o_{jk}) \odot \sigma_f(o_{jk})\), where o_{jk} is the concatenation of the two objects and ⊙ is the element-wise product. Note that we add f as a subscript of the tanh and sigmoid functions to indicate that they follow a fully connected layer. Then, we identify how these relations can affect the future motion of the target through a second gating step of the same form conditioned on its past motion context q^i. This step is essential in (i) determining whether the given relations would affect the target road user's potential path and (ii) reasoning about the best possible route given the motion history of the target. We subsequently collect the relational information from every pair and perform an element-wise sum to produce the relational features r^i. Note that the resulting r^i is target-specific; hence, individual road users generate unique relational features from the same set of objects with their distinct motion context q^i.
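The following is a minimal sketch of this gating computation. The exact wiring of the switches in Figure 4 is our assumption; here the second gate takes the concatenation of the first gate's output with q^i, and all layer widths are illustrative.

```python
import torch
import torch.nn as nn

class RelationGateModule(nn.Module):
    """Sketch of Eq. (1): gated pair-wise relations, summed per target.
    Gate wiring is an assumption based on the text: a tanh/sigmoid gate
    over each object pair, then a second gate conditioned on the
    target's motion context q."""
    def __init__(self, d=64, dq=32, dr=128):
        super().__init__()
        self.pair_fc = nn.Linear(2 * d, dr)    # the f in tanh_f / sigma_f
        self.ctx_fc  = nn.Linear(dr + dq, dr)  # conditions relations on q
        self.out_fc  = nn.Linear(dr, dr)

    def forward(self, objects, q):
        # objects: (n, d) spatio-temporal features; q: (dq,) motion context
        n = objects.size(0)
        pairs = torch.cat([objects.repeat_interleave(n, 0),
                           objects.repeat(n, 1)], dim=1)    # (n*n, 2d): all o_jk
        h = self.pair_fc(pairs)
        gate1 = torch.tanh(h) * torch.sigmoid(h)            # spatio-temporal gate
        hq = self.ctx_fc(torch.cat([gate1, q.expand(n * n, -1)], dim=1))
        gate2 = torch.tanh(hq) * torch.sigmoid(hq)          # motion-context gate
        return self.out_fc(gate2).sum(dim=0)                # element-wise sum -> r^i
```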

4 Future Trajectory Prediction

The proposed approach aims to predict δ future locations of the target road user from the inferred relational features. Rather than regressing the numerical coordinates of future locations, we generate a set of likelihood heatmaps, following the success of human pose estimation in [30, 20, 4], where heatmap-based localization of joint positions outperformed coordinate regression. Similarly, in our task, we predict a set of heatmaps and take the maximum-likelihood point of each heatmap over the sequence of future frames to locate future motion. The following section details how the proposed method learns future locations in the form of heatmaps.

4.1 Trajectory Prediction Network

To effectively produce the pixel-level probability map, we design a trajectory prediction network (TPN) with a set of deconvolutional layers; details of the network architecture are described in the supplementary material. We first reshape the relational features extracted from the GRE into a spatial feature map before running the TPN. The reshaped features are then incrementally upsampled using six deconvolutional layers, each followed by a ReLU activation function. As output, the network predicts a set of δ activations in the form of heatmaps Ĥ = {Ĥ_1, …, Ĥ_δ}. At training time, we minimize the sum of squared errors between the ground-truth heatmaps H and the predictions Ĥ over all 2D locations (x, y):

\[ \mathcal{L}_{TPN} \;=\; \sum_{t=1}^{\delta} \sum_{(x,y)} \big\lVert H_t(x,y) - \widehat{H}_t(x,y) \big\rVert_2^2 . \]

Note that H is generated by placing a Gaussian with a standard deviation of 1.8 (in practice) on the ground-truth coordinates in the 2D image space. Throughout the experiments, we use heatmaps of size 128×128, which balances computational time, quantization error, and prediction accuracy for the proposed network structure.
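For illustration, a ground-truth heatmap as described above can be generated with the following sketch, assuming a 128×128 grid and a peak-normalized Gaussian (the paper specifies only the standard deviation of 1.8; the normalization is our assumption).

```python
import numpy as np

def gaussian_heatmap(cx, cy, size=128, sigma=1.8):
    """Ground-truth heatmap: a Gaussian with std 1.8 centered on the
    rescaled ground-truth coordinate in a size x size grid."""
    xs = np.arange(size)
    ys = np.arange(size)[:, None]
    g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
    return g / g.max()   # peak normalized to 1 (an assumption)

# One heatmap per future frame; the TPN is trained against these with L2 loss.
```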

Figure 5: Visual analysis of spatial refinement. The first row shows the predicted future locations from the vanilla trajectory prediction network as presented in Section 4.1. Heatmap predictions are ambiguous, and hence the trajectory is unrealistic. The second row shows the refined locations by considering spatial dependencies as in Section 4.2.

4.2 Refinement with Spatial Dependencies

The TPN described in the previous section outputs a set of δ heatmaps corresponding to the future locations over time. In practice, however, the output trajectory is sometimes unacceptable for road users, as shown in Figure 5. Our main insight into the cause of this issue is a lack of spatial dependencies [21, 32] among the heatmap predictions. (Although [21, 32] used the term for kinematic dependencies of human body joints, we believe future locations have similar spatial dependencies between adjacent locations, as one follows the other.) Since the network independently predicts δ pixel-level probability maps, there is no constraint enforcing the heatmaps to be spatially aligned across predictions. In the literature, [21, 32] have shown that inflating receptive fields enables a network to learn implicit spatial dependencies in feature space without hand-designed priors or a specific loss function. Similarly, we design a spatial refinement network (SRN) with large kernels so the network can make use of rich contextual information between the predicted locations.

We first extract intermediate activations from the TPN and pass them through a set of convolutional layers with stride 2 so that the output feature map matches the size of an earlier TPN activation. We then upsample the concatenated features using four deconvolutional layers followed by additional convolutions. By using large receptive fields and increasing the number of layers, the network can effectively capture dependencies [32], resulting in less confusion between heatmap locations. In addition, the final convolution enforces our refinement process to achieve pixel-level correction in the filter space. See the supplementary material for structural details. Consequently, the output heatmaps, now carrying spatial dependencies between heatmap locations, show improved prediction accuracy, as shown in Figure 5.
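A heavily simplified sketch of this refinement idea follows, with one downsampling and one upsampling stage instead of the paper's four deconvolutions; the kernel sizes, widths, and the final 1×1 correction layer are all assumptions.

```python
import torch
import torch.nn as nn

class SpatialRefinementNetwork(nn.Module):
    """Sketch of the SRN: downsample intermediate TPN activations with a
    stride-2 conv, concatenate with an earlier TPN activation, then
    upsample with a large-kernel deconvolution so each output location
    sees neighboring heatmap predictions. Sizes are assumptions."""
    def __init__(self, c_mid=64, c_early=64, num_heatmaps=10):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(c_mid, c_mid, kernel_size=7, stride=2, padding=3), nn.ReLU(),
        )
        self.up = nn.Sequential(
            nn.ConvTranspose2d(c_mid + c_early, 64, kernel_size=7, stride=2,
                               padding=3, output_padding=1), nn.ReLU(),
            nn.Conv2d(64, num_heatmaps, kernel_size=1),  # 1x1: pixel-level correction
        )

    def forward(self, mid_act, early_act):
        # early_act is assumed to be half the spatial size of mid_act
        x = torch.cat([self.down(mid_act), early_act], dim=1)
        return self.up(x)   # refined heatmaps, one per future frame
```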

To train the SRN jointly with the rest of the system, we define another L2 loss, \(\mathcal{L}_{SRN}\), of the same form as \(\mathcal{L}_{TPN}\) but computed on the refined heatmaps. The total loss is then

\[ \mathcal{L} \;=\; \lambda_1\, \mathcal{L}_{TPN} + \lambda_2\, \mathcal{L}_{SRN} . \]

We observe that these loss weights properly optimize our SRN with respect to the learned TPN and GRE.
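In code, the joint objective could look like the following sketch; the weights lam1 and lam2 are placeholders, as the paper's values are not reproduced here.

```python
import torch.nn.functional as F

def total_loss(pred_tpn, pred_srn, gt_heatmaps, lam1=1.0, lam2=1.0):
    """Sketch of the two-term objective: L2 on the initial TPN heatmaps
    plus L2 on the refined SRN heatmaps (weights are placeholders)."""
    l_tpn = F.mse_loss(pred_tpn, gt_heatmaps, reduction='sum')
    l_srn = F.mse_loss(pred_srn, gt_heatmaps, reduction='sum')
    # Intermediate supervision on both stages helps avoid vanishing gradients.
    return lam1 * l_tpn + lam2 * l_srn
```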

4.3 Uncertainty of Future Prediction

Forecasting a future trajectory can be formulated as a problem under uncertainty, since several plausible trajectories may exist for the given information. This uncertainty has often been addressed in the literature [16, 9, 25] by generating multiple prediction hypotheses. Specifically, these approaches build their systems on deep generative models such as variational autoencoders [16] and generative adversarial networks [9, 25]. As the prediction models are trained to capture the distribution of future trajectories, they sample multiple trajectories from the learned distributions under noise variations, addressing multi-modal predictions. Unlike these methods, the proposed approach is inherently deterministic and generates a single trajectory prediction. Thus, our framework instead embeds the uncertainty of future prediction by adopting Monte Carlo (MC) dropout.

Bayesian neural networks (BNNs) [5, 19] are designed to capture the uncertainty of the network's weight parameters. (Uncertainty can be categorized into two types [6]: (i) epistemic uncertainty, caused by uncertainty in the network parameters, and (ii) aleatoric uncertainty, captured by inherent noise. We focus on epistemic uncertainty in this paper.) However, the difficulty of performing inference in BNNs has often led to approximations of the parameters' posterior distribution. Recently, [7, 8] found that inference in BNNs can be approximated by sampling from the posterior distribution over a deterministic network's weight parameters using dropout. Given a dataset X = {x_1, …, x_N} and labels Y = {y_1, …, y_N}, the posterior distribution over the network's weight parameters W is p(W | X, Y). Since it cannot be evaluated analytically, a simple tractable distribution q(W) is used instead. The true model posterior can then be approximated by minimizing the Kullback-Leibler divergence between q(W) and p(W | X, Y), which amounts to performing variational inference in Bayesian modeling [7]. Dropout variational inference is a practical technique [12, 13] that approximates variational inference by using dropout at training time to update the model parameters and at test time to sample from the dropout distribution q(W). As a result, the predictive distribution with Monte Carlo integration is as follows:

\[ p\big(y^{*} \mid x^{*}, X, Y\big) \;\approx\; \frac{1}{L} \sum_{l=1}^{L} p\big(y^{*} \mid x^{*}, \widehat{W}_l\big), \qquad \widehat{W}_l \sim q(W), \tag{2} \]

where L is the number of samples drawn with dropout at test time.

The MC sampling technique enables us to capture multiple plausible trajectories under the uncertainty of the learned weight parameters. For evaluation, however, we use the mean of the samples as our prediction, which best approximates variational inference in BNNs as in Eqn. 2. The efficacy of the uncertainty embedding is visualized in Figure 6: we compute the variance of the samples to measure the uncertainty (second row) and their mean to output the future trajectory (third row). At training and test time, we use dropout after layers C6 and C8 of the spatial behavior encoder and after the fully connected layers of the RGM, which we found reasonable for balancing regularization and model accuracy.
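The test-time sampling procedure amounts to the following sketch, assuming the model contains nn.Dropout modules and no layers whose train-mode behavior would otherwise change (e.g., batch normalization).

```python
import torch

def mc_dropout_predict(model, inputs, L=5):
    """Monte Carlo dropout (Eq. 2): keep dropout active at test time,
    draw L stochastic forward passes, and use their mean as the
    prediction and their variance as an uncertainty estimate."""
    model.train()   # keeps nn.Dropout sampling (assumes no BatchNorm side effects)
    with torch.no_grad():
        samples = torch.stack([model(inputs) for _ in range(L)])  # (L, ...)
    return samples.mean(0), samples.var(0)
```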

Figure 6: The efficacy of the uncertainty embedding into our framework. We observe that the performance of our model (first row) can be improved with MC dropout (third row). The uncertainty is visualized in the second row.

5 Experiments

We use a public dataset (SDD [23]) to compare the performance of the proposed approach against self-generated baselines as well as state-of-the-art methods.

Method           1.0 s      2.0 s      3.0 s      4.0 s
Spatio-temporal Interactions
     RE_Conv2D           2.42 / 3.09      3.50 / 5.23      4.72 / 8.16      6.19 / 11.92
     RE_Conv3D           2.58 / 3.24      3.62 / 5.29      4.83 / 8.25      6.27 / 11.92
     RE_Conv2D+LSTM           2.51 / 3.19      3.54 / 5.08      4.60 / 7.54      5.81 / 10.52
     RE_Conv2D+Conv3D           2.36 / 2.99      3.33 / 4.80      4.37 / 7.26      5.58 / 10.27
Relation Gate Module
     GRE_Vanilla           1.85 / 2.41      2.77 / 4.27      3.82 / 6.70      5.00 / 9.58
Spatial Refinement
     GRE_Deeper           2.19 / 2.84      3.24 / 4.88      4.36 / 7.44      5.63 / 10.54
     GRE_Refine           1.71 / 2.23      2.57 / 3.95      3.52 / 6.13      4.60 / 8.79
Monte Carlo Dropout (Ours)
     GRE_MC-2           1.66 / 2.17      2.51 / 3.89      3.46 / 6.06      4.54 / 8.73
     GRE_MC-5           1.61 / 2.13      2.44 / 3.85      3.38 / 5.99      4.46 / 8.68
     GRE_MC-10           1.60 / 2.11      2.45 / 3.83      3.39 / 5.98      4.47 / 8.65
     GRE_MC-20           1.59 / 2.10      2.44 / 3.83      3.38 / 5.97      4.46 / 8.65
Table 1: Quantitative comparison (ADE / FDE in pixels) of our approach with the self-generated baselines using the SDD Dataset [23]. Note that we report our performance at 1 / 5 resolution as proposed in [16].

5.1 Dataset and Preprocessing

The SDD dataset is collected from a drone capturing top-down road scenes at different locations. It contains the 2D bounding-box coordinates of various road users (e.g., pedestrian, skateboarder, bicyclist, cart, car, etc.) with different motions and speeds. We exclude outliers following the preprocessing step in [16], creating 19.5K instances to train and test our model. ([16] may have removed instances from unstabilized images more aggressively; we were not able to remove further outliers to match their number.) Next, we find the center coordinate of each bounding box and use it to locate the corresponding road user in the images. Note that all RGB images are resized to fit a 256×256 image template, and the corresponding center coordinates are rescaled to the 128×128 pixel space. Finally, we generate ground-truth heatmaps of size 128×128 using the rescaled center coordinates. At training and test time, we use 3.2 sec of past images and coordinates of the target road user as input and predict 4.0 sec of future frames as heatmaps. For evaluation, we first find the coordinate of the maximum-likelihood point in each heatmap and rescale these coordinates to the original image scale. Then, the distance error between the ground-truth future locations and our predictions is calculated. We report performance at 1/5 scale as proposed in [16].

5.2 Comparison to Baselines

We conduct extensive evaluations to verify our design choices. Table 1 quantitatively compares the self-generated baseline models by measuring the average distance error (ADE) over a given time interval and the final distance error (FDE) at a specific time frame, both in pixels.
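For reference, both metrics can be computed from heatmap outputs as in this sketch; rescaling the coordinates to the evaluation resolution is assumed to happen beforehand.

```python
import numpy as np

def ade_fde(pred_heatmaps, gt_coords):
    """ADE / FDE from predicted heatmaps: take the argmax of each heatmap
    as the predicted location, then report the mean Euclidean error (ADE)
    and the last-frame error (FDE)."""
    coords = np.array([np.unravel_index(h.argmax(), h.shape)[::-1]
                       for h in pred_heatmaps], dtype=float)  # (T, 2) as (x, y)
    errors = np.linalg.norm(coords - np.asarray(gt_coords, dtype=float), axis=1)
    return errors.mean(), errors[-1]
```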

Spatio-temporal interactions: Encoding spatio-temporal features from images is crucial to discovering both human-human and human-space interactions, which makes our approach distinct from others. We first conduct ablative tests to demonstrate the rationale for using spatio-temporal representations to understand the relational behavior of road users. For this, we compare four baselines (those with the prefix RE_ do not employ the proposed gating process but assume equal importance of relations, similarly to [27]): (i) RE_Conv2D, which discovers only spatial interactions from past images using 2D convolutions; (ii) RE_Conv3D, which extracts both spatial and temporal interactions using 3D convolutions; (iii) RE_Conv2D+LSTM, which first extracts spatial behavior using 2D convolutions and then builds temporal interactions using an LSTM; and (iv) RE_Conv2D+Conv3D, where we infer spatio-temporal interactions as discussed in Section 3.1. As shown in the first section of Table 1, the performance of the RE_Conv2D+LSTM baseline is dramatically improved over RE_Conv2D by replacing the final convolutional layer with an LSTM. The result indicates that discovering the spatial behavior of road users and their temporal interactions is essential to learning descriptive relations. Performance is further enhanced by using 3D convolutions instead of an LSTM, as RE_Conv2D+Conv3D achieves lower prediction error than the RE_Conv2D+LSTM baseline. This comparison validates our use of 2D and 3D convolutions together to model more discriminative spatio-temporal features from a given image sequence. Interestingly, the RE_Conv3D baseline shows performance similar to RE_Conv2D, which is trained to extract only spatial information. For RE_Conv3D, we gradually decrease the temporal depth to 1 through the 3D convolutional layers so that the spatio-temporal features have a consistent size across all baselines. In this way, the network observes temporal information from nearby frames in the early convolutional layers, but it may fail to propagate those local spatio-temporal features across the entire sequence in the later layers.

Relation gate module: To demonstrate the efficacy of the proposed RGM, we train an additional baseline, GRE_Vanilla, which simply replaces the fully connected layers of RE_Conv2D+Conv3D with the proposed RGM pipeline. Note that we match its number of parameters to RE_Conv2D+Conv3D for a fair comparison. The second section of Table 1 validates the impact of the RGM, showing large improvements in both ADE and FDE over the RE_Conv2D+Conv3D baseline. The internal gating process of our RGM explicitly determines which objects are more likely to affect the future motion of the target and allows the network to focus on exploring their relations to the target road user based on the given context. The implication is that the RGM is beneficial for relational inference; its generalization to other domains is considered as future work.

Spatial refinement: In addition to the qualitative evaluation in Figure 5, we quantitatively explore how the proposed spatial refinement process helps produce more acceptable future trajectories. The GRE_Refine baseline is trained with the additional spatial refinement network on top of the GRE_Vanilla structure. In Table 1, GRE_Refine significantly outperforms GRE_Vanilla in both ADE and FDE across all time steps. This validates that the proposed network effectively acquires rich contextual information about dependencies between future locations from the initial activations in feature space. To further validate the use of a separate SRN structure, we additionally design a single end-to-end network (GRE_Deeper), replacing the shallow TPN of GRE_Vanilla with larger receptive fields and more layers (D1-D2 and C18-C25). Its performance is even worse than GRE_Vanilla: the GRE_Deeper baseline suffers from training difficulties, which we attribute to vanishing gradients. Thus, we conclude that the proposed approach with a separate SRN benefits from intermediate supervision with the two loss functions (\(\mathcal{L}_{TPN}\) and \(\mathcal{L}_{SRN}\)), preventing the vanishing gradient problem [32].

Monte Carlo dropout: To validate our uncertainty strategy for future trajectory forecast, we generate a set of GRE_MC baselines with different suffixes -L, where L denotes the number of samples drawn at test time. The fact that every GRE_MC-L baseline performs better than GRE_Refine indicates the efficacy of the presented uncertainty embedding. Operating alongside heatmap prediction, this approach helps us choose the points with the global maximum over the samples. The experiments consistently show a decrease in error rate for both near- and far-future prediction. It is also worth noting that using more samples gradually improves overall performance but saturates at some point, as the error rates of GRE_MC-10 and GRE_MC-20 are not significantly better than those of GRE_MC-5.

Figure 7: The proposed approach properly encodes (a) human-human and (b) human-space interactions by inferring relational behavior from the physical environment (highlighted by a dashed arrow). However, we sometimes fail to predict a future trajectory when a road user (c) unexpectedly changes the direction of its motion or (d) disregards interactions with the environment. (Color codes: yellow - given past trajectory, red - ground truth, green - our prediction)
Figure 8: Illustrations of our predictions during complicated human-human interactions. (a) A cyclist interacts with a slowly moving person. (b) A person meets a group of people. (c) A cyclist first interacts with another cyclist in front and then considers the influence of a person. The proposed approach socially avoids potential collisions.

5.3 Comparison with Literature

We quantitatively compare the performance of our models to state-of-the-art methods on the publicly available SDD dataset [23]. Two methods are used for fair comparison: one human-human interaction oriented approach (S-LSTM [1]) and one human-space interaction oriented approach (DESIRE [16]; we use DESIRE-SI-IT0 Best, which shows the best performance among variants that do not use the oracle error metric). In Table 2, ADE and FDE are examined at four different time steps. The results indicate that incorporating scene context is crucial to successful predictions, as our methods and [16] show lower error rates than [1]. Moreover, all of our models with the GRE generally outperform [16], validating the robustness of the proposed spatio-temporal interaction encoding pipeline, which is designed to discover human-human and human-space interactions from local to global scales. Note that the effectiveness of our approach is especially pronounced for far-future predictions. As discussed in Section 2, the state-of-the-art methods, including [1, 16], restrict human interactions to nearby surroundings and thus overlook the influence of distant road structures, obstacles, and road users. The proposed approach does not limit the interaction boundary, which results in more accurate predictions toward the far future.

Method 1.0 s 2.0 s 3.0 s 4.0 s
S-LSTM [1] 1.93 / 3.38 3.24 / 5.33 4.89 / 9.58 6.97 / 14.57
DESIRE [16]     -   / 2.00     -   / 4.41     -   / 7.18     -   / 10.23
GRE_Vanilla 1.85 / 2.41 2.77 / 4.27 3.82 / 6.70 5.00 / 9.58
GRE_Refine 1.71 / 2.23 2.57 / 3.95 3.52 / 6.13 4.60 / 8.79
GRE_MC-2 1.66 / 2.17 2.51 / 3.89 3.46 / 6.06 4.54 / 8.73
GRE_MC-5 1.61 / 2.13 2.44 / 3.85 3.38 / 5.99 4.46 / 8.68
Table 2: Quantitative comparison (ADE / FDE in pixels) of our approach with the state-of-the-art methods [1, 16] using the SDD Dataset [23] at 1 / 5 resolution.

5.4 Qualitative Evaluation

Figure 7 qualitatively evaluates how inferred relations encourage our model to generate natural motion for the target with respect to human-human interactions (7a) and human-space interactions (7b). Both cases clearly show that spatio-temporal relational inference adequately constrains our predictions to be more realistic. We also present prediction failures in Figure 7c, where the road user suddenly changes course, and in Figure 7d, where the road user disregards interactions with the environment. Extending the model to incorporate such behavior is left as future work. In Figure 8, we illustrate more complicated human-human interaction scenarios. As these examples show, the proposed approach infers relational interactions based on the potential influence of others on the future motion of the target.

6 Conclusion

We proposed a relation-aware framework that forecasts the future trajectories of road users. Inspired by the human capability of inferring relational behavior from a physical environment, we introduced a system that discovers both human-human and human-space interactions. The proposed approach first investigates the spatial behavior of road users and structural representations together with their temporal interactions. Given spatio-temporal interactions extracted from a sequence of past images, we identify pair-wise relations that have a high potential to influence the future motion of the target based on its past trajectory. To generate a future trajectory, we predict a set of pixel-level probability maps where the coordinate of the maximum likelihood in each map corresponds to a future location. We further refine the results by considering spatial dependencies between initial predictions as well as the inherent uncertainty of future forecasting. Evaluations show that the proposed framework achieves state-of-the-art performance.

References

  • [1] A. Alahi, K. Goel, V. Ramanathan, A. Robicquet, L. Fei-Fei, and S. Savarese. Social lstm: Human trajectory prediction in crowded spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 961–971, 2016.
  • [2] A. Alahi, V. Ramanathan, and L. Fei-Fei. Socially-aware large-scale crowd forecasting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2203–2210, 2014.
  • [3] S. Ammoun and F. Nashashibi. Real time trajectory prediction for collision risk estimation between vehicles. In 2009 IEEE 5th International Conference on Intelligent Computer Communication and Processing, pages 417–422. IEEE, 2009.
  • [4] Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh. Realtime multi-person 2d pose estimation using part affinity fields. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1302–1310. IEEE, 2017.
  • [5] J. S. Denker and Y. Lecun. Transforming neural-net output levels to probability distributions. In Advances in neural information processing systems, pages 853–859, 1991.
  • [6] A. Der Kiureghian and O. Ditlevsen. Aleatory or epistemic? does it matter? Structural Safety, 31(2):105–112, 2009.
  • [7] Y. Gal and Z. Ghahramani. Bayesian convolutional neural networks with Bernoulli approximate variational inference. In 4th International Conference on Learning Representations (ICLR) workshop track, 2016.
  • [8] Y. Gal and Z. Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning, pages 1050–1059, 2016.
  • [9] A. Gupta, J. Johnson, L. Fei-Fei, S. Savarese, and A. Alahi. Social gan: Socially acceptable trajectories with generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [10] I. Hasan, F. Setti, T. Tsesmelis, A. Del Bue, F. Galasso, and M. Cristani. Mx-lstm: Mixing tracklets and vislets to jointly forecast trajectories and head poses. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
  • [11] D. Helbing and P. Molnar. Social force model for pedestrian dynamics. Physical review E, 51(5):4282, 1995.
  • [12] A. Kendall, V. Badrinarayanan, and R. Cipolla. Bayesian segnet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding. arXiv preprint arXiv:1511.02680, 2015.
  • [13] A. Kendall and R. Cipolla. Modelling uncertainty in deep learning for camera relocalization. In Proceedings-IEEE International Conference on Robotics and Automation, volume 2016, pages 4762–4769, 2016.
  • [14] Y. Kong and Y. Fu. Human action recognition and prediction: A survey. arXiv preprint arXiv:1806.11230, 2018.
  • [15] C.-P. Lam, C.-T. Chou, K.-H. Chiang, and L.-C. Fu. Human-centered robot navigation—towards a harmoniously human–robot coexisting environment. IEEE Transactions on Robotics, 27(1):99–112, 2011.
  • [16] N. Lee, W. Choi, P. Vernaza, C. B. Choy, P. H. Torr, and M. Chandraker. Desire: Distant future prediction in dynamic scenes with interacting agents. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 336–345, 2017.
  • [17] Y. Li, D. Tarlow, M. Brockschmidt, and R. Zemel. Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493, 2015.
  • [18] W.-C. Ma, D.-A. Huang, N. Lee, and K. M. Kitani. Forecasting interactive dynamics of pedestrians with fictitious play. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pages 4636–4644. IEEE, 2017.
  • [19] D. J. MacKay. A practical bayesian framework for backpropagation networks. Neural computation, 4(3):448–472, 1992.
  • [20] A. Newell, K. Yang, and J. Deng. Stacked hourglass networks for human pose estimation. In European Conference on Computer Vision, pages 483–499. Springer, 2016.
  • [21] T. Pfister, J. Charles, and A. Zisserman. Flowing convnets for human pose estimation in videos. In Proceedings of the IEEE International Conference on Computer Vision, pages 1913–1921, 2015.
  • [22] A. Rasouli and J. K. Tsotsos. Joint attention in driver-pedestrian interaction: from theory to practice. arXiv preprint arXiv:1802.02522, 2018.
  • [23] A. Robicquet, A. Sadeghian, A. Alahi, and S. Savarese. Learning social etiquette: Human trajectory understanding in crowded scenes. In European conference on computer vision, pages 549–565. Springer, 2016.
  • [24] M. Rodriguez, J. Sivic, I. Laptev, and J.-Y. Audibert. Data-driven crowd analysis in videos. In ICCV 2011-13th International Conference on Computer Vision, pages 1235–1242. IEEE, 2011.
  • [25] A. Sadeghian, V. Kosaraju, A. Sadeghian, N. Hirose, and S. Savarese. Sophie: An attentive gan for predicting paths compliant to social and physical constraints. arXiv preprint arXiv:1806.01482, 2018.
  • [26] A. Sadeghian, F. Legros, M. Voisin, R. Vesel, A. Alahi, and S. Savarese. Car-net: Clairvoyant attentive recurrent network. In Proceedings of the European Conference on Computer Vision (ECCV), pages 151–167, 2018.
  • [27] A. Santoro, D. Raposo, D. G. Barrett, M. Malinowski, R. Pascanu, P. Battaglia, and T. Lillicrap. A simple neural network module for relational reasoning. In Advances in neural information processing systems, pages 4967–4976, 2017.
  • [28] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2009.
  • [29] H. Soo Park, J.-J. Hwang, Y. Niu, and J. Shi. Egocentric future localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4697–4705, 2016.
  • [30] J. J. Tompson, A. Jain, Y. LeCun, and C. Bregler. Joint training of a convolutional network and a graphical model for human pose estimation. In Advances in neural information processing systems, pages 1799–1807, 2014.
  • [31] A. Vemula, K. Muelling, and J. Oh. Social attention: Modeling attention in human crowds. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 1–7. IEEE, 2018.
  • [32] S.-E. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh. Convolutional pose machines. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4724–4732, 2016.
  • [33] W. Xu, J. Pan, J. Wei, and J. M. Dolan. Motion planning under uncertainty for on-road autonomous driving. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pages 2507–2512. IEEE, 2014.
  • [34] Y. Xu, Z. Piao, and S. Gao. Encoding crowd interaction with deep neural network for pedestrian trajectory prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5275–5284, 2018.
  • [35] H. Xue, D. Q. Huynh, and M. Reynolds. Ss-lstm: A hierarchical lstm model for pedestrian trajectory prediction. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1186–1194. IEEE, 2018.
  • [36] T. Yagi, K. Mangalam, R. Yonetani, and Y. Sato. Future person localization in first-person videos. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
  • [37] K. Yamaguchi, A. C. Berg, L. E. Ortiz, and T. L. Berg. Who are you with and where are you going? In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1345–1352, 2011.
  • [38] Y. Yao, M. Xu, C. Choi, D. J. Crandall, E. M. Atkins, and B. Dariush. Egocentric vision-based future vehicle localization for intelligent driving assistance systems. In IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2019.
  • [39] S. Yi, H. Li, and X. Wang. Understanding pedestrian behaviors from stationary crowd groups. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3488–3496, 2015.
  • [40] Q. Zhu. Hidden markov model for dynamic obstacle avoidance of mobile robot navigation. IEEE Transactions on Robotics and Automation, 7(3):390–397, 1991.