Spatiotemporal Relationship Reasoning for Pedestrian Intent Prediction

Abstract

Reasoning over visual data is a desirable capability for robotics and vision-based applications. Such reasoning enables forecasting the next events or actions in videos. In recent years, various models have been developed based on convolution operations for prediction or forecasting, but they lack the ability to reason over spatiotemporal data and infer the relationships of different objects in the scene. In this paper, we present a framework based on graph convolution to uncover the spatiotemporal relationships in the scene for reasoning about pedestrian intent. A scene graph is built on top of segmented object instances within and across video frames. Pedestrian intent, defined as the future action of crossing or not crossing the street, is a crucial piece of information for autonomous vehicles to navigate safely and more smoothly. We approach the problem of intent prediction from two different perspectives and anticipate the intention-to-cross within both pedestrian-centric and location-centric scenarios. In addition, we introduce a new dataset designed specifically for autonomous-driving scenarios in areas with dense pedestrian populations: the Stanford-TRI Intent Prediction (STIP) dataset. Our experiments on STIP and another benchmark dataset show that our graph modeling framework is able to predict the intention-to-cross of pedestrians with an accuracy of 79.10% on STIP and 79.28% on the Joint Attention in Autonomous Driving (JAAD) dataset up to one second before the actual crossing happens. These results outperform baselines and previous work. Please refer to http://stip.stanford.edu/ for the dataset and code.

spatiotemporal graphs, forecasting, graph neural networks, autonomous-driving.

I Introduction

While driving, humans make important and intuitive decisions to achieve safe and smooth navigation. These decisions stem from sequences of actions and interactions with others in the scene. Human drivers can perceive the scene and anticipate whether a pedestrian intends to cross the street or not. This is a simple, yet useful piece of information for deciding the next actions to take (e.g., slow down, speed up, or stop). It can be made possible through inferring the interdependent interactions among pedestrians and with other items in the scene, such as vehicles or traffic lights. Machines, on the other hand, lack the ability to read human judgments from the subtle gestures and interactions people make. This makes autonomous vehicles very conservative, which can be nauseating for the riders and frustrating for others on the road.

Fig. 1: We propose a model for pedestrian intention prediction that could be integrated into a self-driving system. For a car equipped with a front-view camera that continuously captures imagery of the environment, our model parses the visual input into a pedestrian-centric graph to understand the relationships among observed entities. These relationships capture the spatiotemporal context of the surroundings, which is essential in predicting future behaviors of the pedestrian, as evidenced by our experimental results.

Developing algorithms that read pedestrian instincts and make judgments based on them requires reasoning about the objects in the scene and how they interact, i.e., visual relationships. Most previous works modeling pedestrians focus on pedestrian detection and tracking [15, 73, 63, 58, 40, 37, 75, 61] or behavior analysis [42, 30]. Although they have obtained convincing performance on several benchmarks, completing such tasks is not enough for human-like driving. On the other hand, trajectory prediction [24, 49, 26, 23, 38, 67, 3, 48] addresses the problem to some extent by predicting the potential future position of the pedestrian. But predicting trajectories with high confidence long enough into the future is a very challenging task, as many different and subtle factors change the trajectories of pedestrians. In contrast, pedestrian intent is a high-level semantic cue that can dramatically influence autonomous driving systems. Intent can be defined as the pedestrian's imminent action of crossing or not crossing the street. Anticipation and early prediction of pedestrian intent will help autonomous vehicles demonstrate safer and smoother driving behaviors.

Recent works [4, 6, 13, 50, 43] introduced pedestrian intent prediction and typically tackled the problem by observing pedestrian-specific features such as location, velocity, and pose. Although these cues can contribute to inferring pedestrian intent, they ignore context and pedestrian interactions with elements in the scene, such as other pedestrians, vehicles, traffic signs, lights, and environmental factors, e.g., zebra crossings. We argue that such interactions can be uncovered through reasoning over object relationships through time. Therefore, we explore graph-based spatiotemporal modeling.

In this paper, we propose an approach to the pedestrian intent prediction problem based on visual relationship reasoning (Fig. 1). To this end, we build a pedestrian-centric dynamic scene graph. We first generate an instance segmentation of each frame in the video using a pre-trained off-the-shelf instance segmentation model [34]. We then extract features from each instance in the image and reason about the relationship between pairs of instances through graph convolution techniques. One graph is defined for each pedestrian instance whose intent is being inferred. The pedestrian node is connected to all other instance nodes as well as a context node, which aggregates all the contextual visual information. To model pedestrian actions and interactions with others through time, we connect pedestrian and context nodes between consecutive frames to further reason about temporal relations. This spatiotemporal modeling allows for capturing the intrinsic scene dynamics, encoding the sequence of subtle human judgments and actions that are very important for inferring intent. In addition, we study the problem from a different point of view by building a location-centric graph. In this setting, we predict how likely it is that a pedestrian will show up in front of the autonomous vehicle in the near future. This is critically important knowledge for an autonomous driving agent, and our visual relationship reasoning is capable of modeling it. With such spatiotemporal relationship reasoning, we obtain results for intent prediction that outperform all baseline and previous methods. A comprehensive description of related work is provided in the supplement.

In summary, the contributions of this work are two-fold: (1) we model the problem of intent prediction via instance-level spatiotemporal relationship reasoning and adopt graph convolution techniques to uncover individual intent; (2) our modeling involves observing the problem from the two different perspectives of pedestrian-centric and location-centric settings, both of which are crucial for autonomous driving applications. In addition, we introduce a new dataset specifically designed for intent prediction in vehicle-centric view scenes.

II Related Work

Pedestrian Detection and Tracking are basic steps for reasoning about pedestrian intent. Previous work on vision-based pedestrian protection systems [15] provides a thorough investigation of such methods based on shallow learning. Recently, various deep learning methods have been proposed for single-stage detection [40, 37], detection in a crowd [75, 61], and detection in the presence of occlusion [40, 77, 74]; all these methods obtain strong accuracy for pedestrian detection. For pedestrian tracking, multi-person tracking methods [54, 20] have been proposed to track every person in a crowded scene. Recently, tracking has been solved jointly with pose estimation [21, 69, 65] and person re-identification [55, 47] in a multi-task learning paradigm. Given these promising results, we take detection and tracking as given and investigate visual reasoning schemes to understand the intrinsic intent of pedestrians.

Trajectory Prediction is another closely related task for understanding pedestrian intent. Recent works leverage human dynamics in different forms to predict trajectories. For instance, [42] proposes Gaussian Process Dynamical Models based on the actions of pedestrians, and [26] uses an intent function with speed, location, and heading direction as input to predict future directions. Other works incorporate environmental factors into trajectory prediction [30, 24, 10, 23]. Some other works observe the past trajectories and predict the future. For instance, [49] combines inverse reinforcement learning and a bi-directional RNN to predict future trajectories. Recently, [67] proposed a crowd interaction deep neural network to model the affinity between pedestrians in the feature space mapped by a location encoder and a motion encoder. A large body of trajectory prediction methods depends on a top-down (bird's-eye) view. Among these works, Social LSTM [3] incorporates common sense rules and social interactions to predict the trajectories of all pedestrians. Social GAN [17] defines a spatial pooling for motion prediction. SoPhie [48] introduces an attentive GAN to predict individual trajectories leveraging physical constraints. Although they obtain impressive results, these top-down methods have limitations that make them inapplicable to the egocentric setting of self-driving scenarios.

One can argue that if we can accurately predict pedestrians' future trajectories, we already know their intent. This is valid, but trajectory prediction is more complex and requires more annotations and supervision. In addition, it is not a well-defined problem, as future trajectories are often very contingent and cannot be predicted far enough into the future with enough certainty. In contrast, we look at the intent of the pedestrians defined in terms of future actions (cross or not cross) based on reasoning over the relationships of the pedestrian(s) and other objects in the scene.

Pedestrian Intent Prediction has been explored by only a few previous works. For instance, [4] uses LIDAR and camera data to predict pedestrian intent based on location and velocity. Bonnin et al. [6] use context information to calculate predefined crafted features for intent prediction. [33] proposes hierarchical movements to represent human action and predicts human action from human appearance. [13] extracts features from pedestrian key-points and integrates features of neighboring frames to predict whether the pedestrian will cross. In other works, [50] introduces a sequence model, and [43], a work concurrent with ours, introduces a dataset for this task. These works only use features from the pedestrian without the context information in the scene, whereas our model leverages a temporally connected spatial graph to incorporate relations between objects in the scene and encode dynamic context information. This facilitates realistic visual reasoning to infer the intent, even in complex scenes. Recent works [64, 62] consider context information, but they require additional modalities or constraints that are not common across datasets, such as a depth modality for [62] and a bird's-eye view for [64], whereas we only use raw video frames as input.

Action Anticipation and Early Prediction methods can be considered the most relevant methodological relatives of intent understanding. Among these works, [1, 52] learn models to anticipate the next action by looking at the sequence of previous actions. Other works build spatiotemporal graphs [46] for first-person action forecasting, or use object affordances [31] and reinforcement learning [9] for early action prediction. In contrast, instead of only looking at the data to build a data-driven forecasting model, we build an agent-centric model that can reason about the scene and estimate the likelihoods of crossing or not crossing.

Scene Graph Parsing and Visual Reasoning. Modeling spatial and temporal context with graphs has been widely explored recently. Some works focus on toy datasets [5, 57]. For real scenes, scene graphs have been a topic of interest for understanding the relationships between objects, encoding rich semantic information about the scene [66]. Previous work generates scene graphs using global context [71], relationship proposal networks [68], conditional random fields [12], iterative message passing [66], or recurrent neural networks [19]. Such graphs built on top of visual scenes have been used for various applications, including image generation [25], action recognition [60], trajectory prediction [28], and visual question answering [56]. However, one of their main usages is reasoning about the scene, as they outline a structured representation of the image content. Among these works, [51] uses scene graphs for explainable and explicit reasoning with structured knowledge. Aditya et al. [2] use a directed and labeled scene description graph for reasoning in image captioning, retrieval, and visual question answering applications. In another recent work, [11] introduces a method for globally reasoning over regional relations in a single image. In contrast to the previous work, we build agent-centric (e.g., pedestrian-centric) graphs to depict the scene from the agent's point of view. We use a context node to cope with a varying number of objects, which relaxes the constant graph-size constraints required by several previous works [5, 57, 28, 22, 41, 53]. Furthermore, instead of creating one single scene graph, we build a graph for each time point and connect the important nodes across different times to encode the temporal dynamics (denoted by temporal connections). We show that these two characteristics can reveal pedestrian intent through reasoning over the spatiotemporal sequence of visual data.

III Method

We propose a model that leverages the spatiotemporal context of the scene to make the prediction. Given a sequence of video frames observed in the past, the model first parses each frame into pedestrians and objects of interest, each of which is encoded into a feature vector (Section III-A). Then, for each pedestrian, we construct a pedestrian-centric spatiotemporal graph using these features as node representations and produce a feature vector that encodes both the scene context and the temporal history over the observed frames (Sections III-B and III-C). Finally, an RNN is used to predict the behavior of the pedestrian (Section III-D). Fig. 2 shows an overview of the model.

Fig. 2: Overview of the model. With a sequence of observed input frames, the model observes whether a pedestrian is crossing (frame in red) or not (frame in green) by capturing the relationship between the pedestrian and the surroundings. It then predicts the future crossing likelihood up to a certain temporal horizon. Specifically, this is achieved with four components: 1) Scene Parsing (Sec. III-A): the input video frames are first parsed into pedestrians (bounding boxes) and objects (binary masks); 2) Graph Convolution (Sec. III-B): a pedestrian-centric graph is built on each frame to connect a pedestrian with the surroundings; 3) Temporal Connection (Sec. III-C): the pedestrian nodes at each frame and the frame-level representations are connected temporally; 4) Prediction (Sec. III-D): using the rich representation on observed frames, the model tries to predict the future crossing behavior of the pedestrian of interest. More details are described in Section III.

III-A Scene Parsing

Self-driving systems equipped with egocentric cameras often need to cope with noisy information coming from busy views. We hence remove irrelevant information by focusing only on the pedestrians, the vehicles, and certain related items in the environment. Table I provides a complete list of objects of interest. We crop out pedestrians using ground-truth bounding boxes. The reason for using ground truth is that pedestrian detection can be considered a solved problem given recent progress on person detectors [45, 35, 44], and that we would like to better evaluate our proposed method of leveraging spatiotemporal context by isolating the graph component, which we will introduce later. Similarly, we use an off-the-shelf instance segmentation framework [34] trained on [39] to obtain the binary object masks. Then, for each object, we crop out the union bounding box enclosing both the object and the pedestrian, such that the relative position of the object and the pedestrian is preserved. Note that the appearance information is thrown away, since it is the location and movement of the object relative to the pedestrian that affect the pedestrian's crossing behavior the most, rather than the exact appearance such as color and texture. This further simplifies the information that the model needs to handle, and proved beneficial in our experiments. Finally, the cropped-out pedestrian and the union binary masks are encoded using two separately tuned ResNet-18 backbones (a short sketch of this parsing step is given after Table I).

Category Objects
Vehicle bike, bus, car, caravan, motorcycle, trailer, truck, other
Road user bicyclist, motorcyclist, other riders
Environment crosswalk (plain), crosswalk (zebra), traffic light
TABLE I: List of objects of interest.
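Below is a minimal sketch of the parsing step described in this subsection: cropping the pedestrian with its ground-truth box, cropping the union box around each object's binary mask, and encoding both with separately tuned ResNet-18 backbones. The helper names, input formats, and the use of torchvision are illustrative assumptions rather than the authors' released code.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

def union_box(ped_box, obj_box):
    """Union bounding box (x1, y1, x2, y2) enclosing both the pedestrian and the object."""
    return (min(ped_box[0], obj_box[0]), min(ped_box[1], obj_box[1]),
            max(ped_box[2], obj_box[2]), max(ped_box[3], obj_box[3]))

def crop_and_resize(img, box, size=224):
    """Crop an HxWxC array to the box and resize to a fixed input size (normalization omitted)."""
    x1, y1, x2, y2 = [int(v) for v in box]
    crop = torch.from_numpy(img[y1:y2, x1:x2]).float().permute(2, 0, 1).unsqueeze(0)
    return nn.functional.interpolate(crop, size=(size, size), mode='bilinear', align_corners=False)

def make_encoder():
    backbone = models.resnet18(weights=None)  # the paper tunes two separate ResNet-18s
    backbone.fc = nn.Identity()               # keep the 512-d pooled feature
    return backbone

ped_encoder, mask_encoder = make_encoder(), make_encoder()

def encode_frame(frame_rgb, ped_box, obj_masks, obj_boxes):
    """frame_rgb: HxWx3 image; obj_masks: list of HxW binary masks; obj_boxes: their boxes."""
    ped_feat = ped_encoder(crop_and_resize(frame_rgb, ped_box))               # (1, 512)
    obj_feats = []
    for mask, box in zip(obj_masks, obj_boxes):
        mask3 = np.repeat(mask[:, :, None].astype(np.float32), 3, axis=2)     # mask as 3 channels
        # Crop the union box so the object/pedestrian relative layout is preserved.
        obj_feats.append(mask_encoder(crop_and_resize(mask3, union_box(ped_box, box))))
    return ped_feat, obj_feats
```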

III-B Graph Convolution for Modeling Spatiotemporal Context

The main contribution of this work is to augment the prediction model with context information, including both spatial context from objects in the scene, as well as the temporal context from the history. We hence propose a pedestrian-centric spatiotemporal graph spanning both space and time.

Pedestrian-Centric Graph

We use a graph structure to make use of the context information. Intuitively, each pedestrian or object corresponds to a graph node, and the edges reflect the relationship strength between the two connected nodes. We define the graph convolution operation [29] as:

$$Z = G X W, \tag{1}$$

where $X$ is a matrix whose rows are the feature vectors of the graph nodes, $W$ is the trainable weight of a graph convolution layer, $G$ is the adjacency matrix of the graph, and $Z$ is the resulting matrix of updated node features.
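As a concrete illustration, here is a minimal sketch of the graph convolution in Eq. (1), applied repeatedly with weights shared across layers as in the full model (see the ablation in Table IV). The PyTorch implementation details and the ReLU nonlinearity between layers are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """Eq. (1): Z = G X W, repeated num_layers times with a shared weight matrix."""
    def __init__(self, dim, num_layers=2):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)   # weight shared across the layers
        self.num_layers = num_layers

    def forward(self, X, G):
        # X: (n+1, d) node-feature matrix, G: (n+1, n+1) adjacency with learned edge weights.
        Z = X
        for _ in range(self.num_layers):
            Z = torch.relu(G @ self.W(Z))          # nonlinearity between layers is assumed
        return Z
```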

Since our goal is to predict the crossing behavior of each pedestrian, we model each pedestrian with a star graph centered at the pedestrian. We define the edges using information from both the spatial relationship between the pedestrian and the objects and the appearance of the pedestrian. The spatial relationship is a good indicator of the importance of an object; for example, the distance of an object with respect to the pedestrian can play an important role. The appearance of the pedestrian is also considered, since it can serve as a strong cue to the pedestrian's intent, which can often be inferred from the head orientation or gaze direction. For example, an object to which the pedestrian is paying high attention should be associated with a heavier edge weight.

Inspired by [72], we describe the spatial relationship with a length-8 vector $s_i$, whose entries include the height and width of the union bounding box and the differences between box corners and centers: $(\Delta x_{tl}, \Delta y_{tl})$ for the upper-left corner, $(\Delta x_{br}, \Delta y_{br})$ for the lower-right corner, and $(\Delta x_{c}, \Delta y_{c})$ for the center. We then combine the spatial vector with the feature representation of the pedestrian and use them to calculate the edge weight. The edge weight $a_i$ for the $i$-th object is

$$a_i = \mathrm{sigmoid}\!\left( \phi\big([x_p;\, s_i]\big)^{\top} \psi\big(x_{o_i}\big) \right), \tag{2}$$

where $x_p$ is the feature vector for the pedestrian capturing its appearance, $x_{o_i}$ is the feature vector for the binary mask of object $i$, $[\,\cdot\,;\cdot\,]$ denotes concatenation, and $\phi$ and $\psi$ are learned embeddings. For a graph with $n$ object nodes, $G$ is a symmetric $(n+1)\times(n+1)$ matrix whose entries are, assuming $i \le j$,

$$G_{ij} = \begin{cases} 1 & \text{if } i = j,\\ a_j & \text{if } i = 1 \text{ and } j > 1,\\ 0 & \text{otherwise}, \end{cases} \tag{3}$$

where node $1$ is the pedestrian and the other $n$ nodes correspond to the objects.
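The following sketch assembles the pedestrian-centric star graph: the length-8 spatial relation vector, the learned edge weights matching the reconstruction of Eq. (2), and the adjacency matrix of Eq. (3). The linear embeddings, the embedding size, and the choice of which boxes the corner/center offsets are measured between are illustrative assumptions.

```python
import torch
import torch.nn as nn

def spatial_relation(union_box, ped_box):
    """Length-8 vector: union-box height/width plus corner and center offsets.
    The paper leaves the reference boxes implicit; pedestrian-vs-union offsets are assumed here."""
    ux1, uy1, ux2, uy2 = union_box
    px1, py1, px2, py2 = ped_box
    ucx, ucy = (ux1 + ux2) / 2, (uy1 + uy2) / 2
    pcx, pcy = (px1 + px2) / 2, (py1 + py2) / 2
    return torch.tensor([uy2 - uy1, ux2 - ux1,        # height, width of the union box
                         px1 - ux1, py1 - uy1,        # upper-left corner offsets
                         px2 - ux2, py2 - uy2,        # lower-right corner offsets
                         pcx - ucx, pcy - ucy],       # center offsets
                        dtype=torch.float32)

class EdgeWeights(nn.Module):
    def __init__(self, dim=512, embed=128):
        super().__init__()
        self.phi = nn.Linear(dim + 8, embed)   # pedestrian appearance + spatial vector
        self.psi = nn.Linear(dim, embed)       # object mask feature

    def forward(self, ped_feat, spatial_vecs, obj_feats):
        # ped_feat: (d,); spatial_vecs: (n, 8); obj_feats: (n, d)
        n = obj_feats.shape[0]
        query = self.phi(torch.cat([ped_feat.expand(n, -1), spatial_vecs], dim=1))
        a = torch.sigmoid((query * self.psi(obj_feats)).sum(dim=1))  # Eq. (2): one weight per object
        G = torch.eye(n + 1)                   # self-loops on the diagonal
        G[0, 1:] = a                           # pedestrian (node 0 here) to each object
        G[1:, 0] = a                           # symmetric star graph, Eq. (3)
        return G
```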

Location-Centric Prediction

We also consider the location-centric setting, where instead of predicting the crossing behavior of each pedestrian, we predict the probability that someone will cross a designated area. Fig. 3 shows an illustration of the setting. At each moment, the model focuses on a trapezoidal area, which can be interpreted as the area that the ego-car will cover in the near-future horizon for which we are predicting. In this way, the problem of predicting whether a pedestrian of interest will cross the road in the next $t$ seconds can equivalently be translated to whether any pedestrian will enter the highlighted red area that the car will cover in $t$ seconds.

This setting is beneficial especially for busy scenes with numerous pedestrians, since instead of building a graph for each pedestrian, the model focuses only on those that may affect the driving behavior. It not only simplifies the computation but is also more relevant from a control point of view.

We also modify the pedestrian-centric graph to be location-centric, where the center node encodes the point of view of the ego-car, and the peripheral nodes are surrounding vehicles, riders, traffic signs, as well as pedestrians. Rather than computing the pairwise relative spatial location as part of the edge weight, we embed the egocentric scene and the context objects (including humans) into a common embedding space, on which we define the edge weight to be the inner product of the egocentric scene and an object, passed through a sigmoid function.
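A minimal sketch of these location-centric edge weights is shown below, assuming learned linear projections into a common embedding space; the embedding dimension and the projection layers are assumptions.

```python
import torch
import torch.nn as nn

class LocationCentricEdges(nn.Module):
    """Edge weights between the egocentric view (center node) and surrounding objects."""
    def __init__(self, scene_dim=512, obj_dim=512, embed=128):
        super().__init__()
        self.embed_scene = nn.Linear(scene_dim, embed)
        self.embed_obj = nn.Linear(obj_dim, embed)

    def forward(self, scene_feat, obj_feats):
        # scene_feat: (scene_dim,) egocentric scene feature;
        # obj_feats: (n, obj_dim) vehicles, riders, traffic signs, and pedestrians.
        s = self.embed_scene(scene_feat)           # shared embedding space
        o = self.embed_obj(obj_feats)
        return torch.sigmoid(o @ s)                # (n,) sigmoid of inner products
```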

Fig. 3: Intent prediction under the location-centric setting, where crossing in the future is equivalently mapped to crossing in the distance at the current moment. By focusing on the area that the ego-car is going to cover in the near future (highlighted in red), we construct a star graph centering on the egocentric view, with objects and pedestrians in the scene being the nodes.

III-C Temporal Connection

The previous pedestrian-centric star graph is constructed on each frame. However, we would like the communication within the graph to also consider the temporal history. In order to encode temporal relations into the node features, we first connect the pedestrian nodes in each frame with a GRU. Modeling the temporal relation of the contextual objects is slightly trickier, since the number of objects present in each frame may vary. However, we do not explicitly draw temporal associations among objects in different frames: when performed properly, graph convolution guarantees that information is sufficiently communicated among the nodes, hence the temporal information can be captured by the pedestrian node and passed to the context objects. We also show this experimentally in the ablation study.
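A small sketch of this temporal connection follows, assuming a GRU cell that rolls over the pedestrian feature at each observed frame so that its hidden state, which carries the temporal history, serves as the pedestrian node feature entering each frame's graph. The hidden size is an assumption.

```python
import torch
import torch.nn as nn

class PedestrianTemporalNode(nn.Module):
    """GRU over the pedestrian feature; its hidden state is the per-frame pedestrian node."""
    def __init__(self, feat_dim=512, hidden=256):
        super().__init__()
        self.cell = nn.GRUCell(feat_dim, hidden)

    def forward(self, ped_feats):
        # ped_feats: (T, feat_dim), pedestrian appearance features over T observed frames.
        h = torch.zeros(1, self.cell.hidden_size)
        states = []
        for t in range(ped_feats.shape[0]):
            h = self.cell(ped_feats[t:t + 1], h)   # update the history with frame t
            states.append(h)
        return torch.cat(states, dim=0)            # (T, hidden): per-frame node features
```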

III-D Prediction GRU

To leverage the spatiotemporal context, the model performs two layers of graph convolution on each observed frame, where the features for the pedestrian node and the context node are hidden states of the corresponding GRUs. The refined pedestrian and context feature vectors after graph convolution are then concatenated at each frame, and are aggregated by an additional GRU. The last hidden state of this additional GRU is then used for anticipating crossing behavior in the future, which is achieved by a designated prediction GRU.
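The prediction stage can be sketched as follows, assuming the refined pedestrian and context features from graph convolution are already available per frame. The hidden sizes, the dummy-input prediction GRU, and the sigmoid classifier head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CrossingPredictor(nn.Module):
    def __init__(self, node_dim=256, hidden=256, horizon=30):
        super().__init__()
        self.aggregate = nn.GRU(2 * node_dim, hidden, batch_first=True)  # over observed frames
        self.predict = nn.GRU(1, hidden, batch_first=True)               # rolls into the future
        self.classifier = nn.Linear(hidden, 1)
        self.horizon = horizon

    def forward(self, ped_nodes, ctx_nodes):
        # ped_nodes, ctx_nodes: (B, T, node_dim) refined features after graph convolution.
        obs = torch.cat([ped_nodes, ctx_nodes], dim=-1)         # concatenate per frame
        _, h = self.aggregate(obs)                              # h: (1, B, hidden), last hidden state
        dummy = torch.zeros(obs.shape[0], self.horizon, 1)      # no future observations available
        out, _ = self.predict(dummy, h)                         # (B, horizon, hidden)
        return torch.sigmoid(self.classifier(out)).squeeze(-1)  # per-frame crossing probability

# Usage sketch: probs = CrossingPredictor()(ped_nodes, ctx_nodes)  -> (B, 30) future probabilities
```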

IV Experiments

In this section, we evaluate our method on two datasets and compare the results with a wide range of baselines. We examine different settings of the model structure (ablation studies) as well as choices of the features. We also explore how far into the future the model can predict by expanding the temporal horizon.

IV-A Datasets

Joint Attention in Autonomous Driving (JAAD) Dataset

The JAAD [32] dataset focuses on joint attention in the context of autonomous driving in everyday urban settings. It is designed to explore pedestrian and driver behaviors, with labels of when the pedestrians cross. JAAD contains videos captured with a front-view camera under various scenes, weather, and lighting conditions. There are 346 videos with 82032 frames, where the length of the videos ranges from 60 frames to 930 frames. Each video may contain multiple pedestrians with ground-truth labels for 2D locations in the frame and 9 actions. We report metrics only on the action of crossing, but all 9 action labels are used for model training. The dataset is split into 250 videos for training and the rest for testing, which correspond to 460 pedestrians and 253 pedestrians, respectively. The crossing action is roughly balanced, with 44.30% of frames labeled as crossing and 55.70% as non-crossing throughout the dataset.

Stanford-TRI Intent Prediction (STIP) Dataset

In this paper, we introduce a new dataset of driving scenes recorded in dense urban areas of 8 cities in California and Michigan in the United States, under various weather conditions. This dataset is created within the context of a collaboration between Stanford University and the Toyota Research Institute (TRI). The dataset contains 923.48 minutes (at 20 fps; 1,108,176 frames in total) of driving scenes with high-quality recordings. A total of over 350,000 pedestrian boxes were annotated manually at 2 fps. The total number of pedestrian tracks is over 25,000, with a median length of 4 seconds. Each annotated sequence contains video recordings of three cameras simultaneously (left, front, and right). One sample sequence is shown in Fig. 4, and Table II compares STIP with other available autonomous driving datasets. Our dataset is the longest one recorded in dense urban areas and has the largest number of annotated (and interpolated) frames.

Dataset Year Len (min) #Frames #Peds #Cams C/NC
KITTI [14] 2012 90 80,000 12,000 1
JAAD [32] 2017 46 82,000 337,000 1
BDD 100k [70] 2017 60,000 100,000 86,047 1
PedX [27] 2018 - 10,152 14,091 1
NuScenes [7] 2019 330 1,400,000 1,400,000 6
Ours (STIP) 2020 923.48 1,108,176 3,500,000 3
TABLE II: Comparison of STIP with other datasets. The columns indicate publication year of the dataset, total length of the videos in minutes, number of annotated frames, number of annotated (& interpolated) pedestrian instances, number of cameras, and finally if the dataset has cross/not cross (C/NC) annotations on each pedestrian at each frame. STIP has 350K annotated (3.5M interpolated) pedestrian instances.

Given these manually annotated frames, we used instance segmentation results and a JPDA-based tracker [18] to interpolate between them. Post-processing was applied to constrain the tracks to the known, manually annotated, ground-truth frames. The resulting 20 fps trajectories are prepared for release; however, in this work we report results on the manually annotated 2 fps data. For pedestrian intent prediction, as a preliminary study, we focus on the parts of the videos where the car is near an intersection. We manually select front-view camera recordings of 556 video segments captured at busy intersections. These video segments correspond to 2525 pedestrians in 102.37 minutes of video for training, and 823 pedestrians in 23.43 minutes of video for testing.

Fig. 4: One sample scene with left, front, and right camera views from the STIP dataset. All pedestrians in all views are annotated with bounding boxes and cross/not-cross labels (omitted from this figure for clarity).

IV-B Baseline Models

The model takes in a fixed number of frames as past observation and predicts the per-frame probability of crossing for up to a fixed number of frames in the future. We report separately the performance averaged over all predicted frames, as well as the performance on the last predicted frame, which helps to estimate how the prediction performance scales as the temporal horizon increases (a short sketch of this metric computation is given after the baseline descriptions below). The following baseline models are used to compare with our proposed model:

Pose-based method: we compare with [13], which predicts the crossing behavior using pose key points. For each frame, the input is a vector containing 18 2D coordinates from the same 9 joints as in their paper: the neck, the left/right shoulders, the left/right hips, the left/right knees, and the left/right ankles. The joints are obtained from OpenPose [8] pretrained on COCO [36] without fine-tuning. We re-do the experiments on JAAD since we are using the newest version of JAAD, different from that used in [13], and also since we use different numbers of observed and predicted frames.

Pedestrian Locomotion Forecasting (PLF) [38]: one of the strong baselines for intent prediction is to first predict the trajectories of the pedestrians and then use the predicted trajectory to classify the intent. We use two strong trajectory prediction methods, 'Constant Velocity' and PLF. We also use PLF to predict both the future pose and trajectory, based on which we then predict the intent. Our experimental settings are the same as those reported in [38].

1D CNN [1]: predicting the future crossing behavior can be considered as a type of action anticipation. We therefore also include the comparison with the 1D CNN module which predicts the future action labels directly from the observed action labels. This baseline model shares the same backbone as our detection model.

Temporal Segment Network (TSN) [59]: we take the model with a BN-Inception backbone pretrained on ImageNet. Since our experiments show that using the pedestrian region works better than taking in the entire frame, we use the same pedestrian regions as input for TSN.

Temporal Relation Network (TRN) [76]: we also take the model with a BN-Inception backbone pretrained on ImageNet, which takes the pedestrian regions as input. We conduct experiments on both the single-scale and the multi-scale versions, where the largest scale in the multi-scale version is set as in the original paper. However, since TRN bases its prediction on features from frame tuples, no frame-wise prediction is available; we only report results in the last-prediction setting, where only one final prediction is needed.
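Before turning to the results, here is a short sketch of the two accuracy numbers reported in the tables, as introduced at the beginning of this subsection: per-frame crossing predictions are thresholded, and accuracy is computed both averaged over the predicted horizon and on the last predicted frame. The 0.5 threshold and the array shapes are assumptions.

```python
import numpy as np

def report_accuracy(probs, labels, threshold=0.5):
    """probs, labels: (num_pedestrians, horizon) arrays of predicted probability / 0-1 label."""
    correct = ((probs >= threshold).astype(np.int32) == labels)
    avg_acc = correct.mean()            # averaged over all predicted frames ("Avg on frames 1-30")
    last_acc = correct[:, -1].mean()    # on the last predicted frame only ("30th frame")
    return avg_acc, last_acc
```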

# Model Avg on frames 1-30 On 30th frame
1 Constant velocity 70.08% 68.30%
2 PLF (traj.) [38] 69.82% 64.56%
3 Pose-based [13] 67.00% 67.00%
4 PLF (pose) [38] 71.36% 68.25%
5 1D CNN [1] 72.78% 69.65%
6 TSN [59] 67.18% 63.64%
7 TRN [76] - 63.74%
8 Ours 79.28% 76.98%
TABLE III: Accuracy comparison with baseline models: our model (row #8) outperforms state-of-the-art models based on trajectory prediction (rows #1&2), past pose (row #3), past and predicted future pose (row #4), action anticipation (row #5), and early action recognition (rows #6&7).

IV-C Results on JAAD Dataset

Table III shows the results comparing our proposed model with the baselines, where our model outperforms the baselines by a large margin. One possible explanation is that most of the baseline models are geared towards action recognition, early action recognition, or action anticipation. Hence, they are not optimized for the task of predicting pedestrian intention.

The performance of the pose-based model was not as good as we would expect from [13], which we think may be due to two reasons. First, the poses output by the pose estimator were not of decent quality on this more challenging dataset: compared to [13], which used JAAD v1, the current version of JAAD is much larger and more complex. Second, [13] simplifies the task by leaving out predictions for pedestrians less than 60 pixels in width.

Ablation Study

We present the results on ablation experiments to analyze the contribution of each model component in isolation, as well as to select the most suitable design choice in terms of both the model structure and use of features.

Choice of Model Structure:

Graph-based vs. Concatenation: incorporating context information could take a simple form by directly concatenating the feature vectors of the contextual objects and the pedestrian. Since the number of objects may vary across frames, we represent the context by encoding the union of the binary masks with the ResNet backbone. Note that we chose to encode the binary masks rather than the original RGB frame, since the latter was shown to give poorer performance. These preliminary experiments led us to parse the scene with a segmentor, which reduces noise in the scene while preserving the semantic information and spatial relations. We use two GRUs to model temporal connectivity, one on pedestrian features and one on the concatenated frame-level features, which is the same setting as our full model so as to have a fair comparison. The results of this concatenation model are shown in the first row of Table IV.

Variation on graph structures: the variations are defined in terms of the number of convolution operations performed and the connectivity of the adjacency matrix. We experimented with 0 to 3 graph convolution layers, with shared layer parameters (Table IV rows 2 to 5) and without weight sharing (Table IV rows 6 to 7). Using 0 convolution layers means directly classifying on the feature matrix $X$ in Eq. (1) rather than the refined feature matrix $Z$. Note that 0 convolution layers is different from concatenation: the 0-layer model still uses a graphical structure with each object encoded separately, capturing its spatial relation with the pedestrian. The improvement from concatenation to 0 layers thus reflects the importance of spatial modeling.

The results suggest that a better-blended graph (i.e., more graph convolution layers) generally gives better performance, as evidenced by the gap from 0 to 1 to 2 layers. However, the performance gain seems to saturate, since the performances for 2 and 3 layers are similar. We hence choose 2 layers for lighter computation. Note also that here the 2-layer and 3-layer models have the same number of learnable parameters, since the layer weights are shared. One may consider increasing the model capacity by removing the weight sharing; however, the experiments suggest that this leads to overfitting the data and hence degrades the performance. The problem of overfitting is also reflected in row 8, where we increase the model capacity by relaxing the structure to a fully connected one.

Choice of temporal connection: since we consider intention prediction as a sequence modeling problem, it is not surprising that the temporal connection factors heavily into the model (row 9). In addition, removing the temporal connection among pedestrian nodes hurts the performance (row 10), which means it is important for the pedestrian node to maintain a temporal history. However, adding more temporal connections may not always be beneficial, as evidenced by row 11, where an additional GRU was introduced on aggregated context features. The reason might be that the pedestrian GRU alone suffices when information is sufficiently communicated among the graph nodes, and the additional context GRU may introduce redundancy.

Choice of Features: Given a chosen model structure, we experiment with incorporating different information in the hope of finding a rich yet lightweight feature representation and report the results in Table V. The gap over the pedestrian-only variation (row 1) confirms the effectiveness of including context information. However, additionally adding the pedestrian pose did not help the performance. This may be due to the quality of the poses, similar to the situation with the pose-based baseline (Table III row 3). Though theoretically the model can learn to ignore poorly predicted poses, in practice the poses behave like a source of noise, making the learning task more challenging. It is also worth pointing out that the semantic labels of the objects may not be essential to the reasoning, as evidenced by row 4 of Table V. We hypothesize that this is because the change of relative position already contains information that indicates the object type.

# Model Avg on frames 1-30 On 30th frame
1 Concat 76.96% 75.70%
2 0 layers 77.97% 76.38%
3 1 layer 78.26% 76.68%
4 2 layers (Ours) 79.28% 76.98%
5 3 layers 78.75% 77.28%
6 2 layers, no sharing 78.17% 76.83%
7 3 layers, no sharing 78.64% 76.83%
8 2 layers, FC 78.42% 76.53%
9 2 layers, no temporal 71.01% 69.96%
10 2 layers, no ped GRU 78.58% 76.68%
11 2 layers, add ctxt GRU 78.85% 78.62%
TABLE IV: Ablation study on graph design: we compare different design choices by varying 1) the number of graph convolution layers; 2) whether weights are shared across layers ("no sharing"); 3) whether to use a fully connected graph ("FC"); and 4) how temporal relations are introduced ("no temporal" / "no ped GRU" / "add ctxt GRU"). We also compare with a structure that simply concatenates the features for the pedestrian and the context ("Concat"), which is often a strong baseline. The prediction covers 30 frames / 1 second into the future.
# Model Avg on frames 1-30 On 30th frame
1 Graph - Pedestrian 77.81% 76.32%
2 G - Ped + ctxt (Ours) 79.28% 76.98%
3 G - Ped + ctxt + pose 76.11% 74.14%
4 G - Ped + ctxt + objCls 78.21% 76.14%
TABLE V: Ablation study on the choice of features: given the fixed optimal structure, we test different feature choices. Adding context information proves useful. The prediction covers 30 frames, i.e., 1 second into the future (Section IV-C1).

Extending the Temporal Horizon

Here, we study how the prediction performance changes as we extend the temporal horizon. In addition to the previous results on 30 frames, we extend the prediction to 60 and 90 frames in this subsection, which correspond to 2 and 3 seconds into the future.

Table VI and Fig. 5 summarize the results. In general, prediction improves as more observations come in, as evidenced by the increasing curves in the left-most region of Fig. 5. The turning point occurs when switching into prediction, where both the accuracy and the confidence (the uncalibrated probability calculated as the sigmoid of the logits, the direct output of the last linear layer in the classifier) decrease as the temporal horizon grows. The best accuracy in each region is achieved by the model trained for that specific setting. Interestingly, however, the confidence may not be consistent with the performance; for example, the 30-frame model gets the best performance on the first 30 future frames while being overall least confident about these predictions. We gain more certainty when more frames of data are available: as the 60- and 90-frame models receive more data and supervision, they can infer the intent more confidently.

Fig. 5: Prediction accuracy (gray shade) and confidence (light teal shade) as the time horizon increases. We analyze three models which are trained to predict for up to 30, 60, and 90 frames into the future, each taking 1 second (frame rate = 30 FPS) of observation as input. Specifically, "x frames (acc/conf)" refers to the accuracy or confidence curve for the model trained to predict over x future frames.
Length Avg over predicted frames On last frame
30 frames (1s) 79.28% 76.98%
60 frames (2s) 75.10% 73.09%
90 frames (3s) 71.72% 68.31%
TABLE VI: Accuracy compared at different prediction lengths. The predictions are over 30, 60, and 90 frames, or equivalently, 1, 2, or 3 seconds in the future.

Results of Location-Centric Prediction

We restructure the dataset and obtain 32 video clips with 72882 frames, of which 14808 frames (around 20.3%) contain crossing behavior. The number of pedestrians per frame ranges from 0 to 9, with an average of 1.76 pedestrians. We use the same framework proposed in Section III-B as a lightweight learning scheme for prediction in the location-centric scenario.

Similar to the pedestrian-centric setting, we compare with a concatenation baseline to demonstrate the effectiveness of the graph structure, with results shown in Table VII. However, in this case we only train the concatenation model for one epoch, as opposed to the experiments in Table IV, where the model was trained to convergence. This one epoch can be considered a pretraining stage, and the location-centric graph then operates on the features extracted from the pretrained concatenation model. Note that the graph model is lighter to train with features as input, consuming about one-tenth of the GPU memory and taking about one-fifth of the time to complete an epoch.
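A hedged sketch of this two-stage procedure is given below: the concatenation model is trained for a single epoch as pretraining, then frozen and used as a feature extractor for the lighter location-centric graph model. The optimizer, loss, and the extract_features hook are hypothetical placeholders, not the authors' training code.

```python
import torch
import torch.nn as nn

def pretrain_concat(concat_model, loader, lr=1e-4):
    """Train the concatenation baseline for a single epoch as pretraining."""
    opt = torch.optim.Adam(concat_model.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss()
    for frames, labels in loader:                  # one epoch only
        opt.zero_grad()
        bce(concat_model(frames), labels).backward()
        opt.step()
    return concat_model

def train_graph_on_features(graph_model, concat_model, loader, epochs, lr=1e-4):
    """Train the location-centric graph on features from the frozen pretrained model."""
    concat_model.eval()                            # frozen feature extractor
    opt = torch.optim.Adam(graph_model.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for frames, labels in loader:
            with torch.no_grad():
                feats = concat_model.extract_features(frames)  # hypothetical feature hook
            opt.zero_grad()
            bce(graph_model(feats), labels).backward()
            opt.step()
    return graph_model
```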

# Model Avg on frames 1-30 On 30th frame
1 Concat 74.13% 71.74%
2 Graph 86.38% 81.88%
TABLE VII: Results for the location-centric setting. With a pretrained concatenation model, the location-centric graph is able to continue the task learning in a memory- and computation-efficient manner. The prediction covers 30 frames, i.e., 1 second into the future.

IV-D Results on STIP Dataset

Similar to the settings on the JAAD dataset, we examine how the performance of our model scales as the time horizon increases. Possibly due to denser pedestrian appearances, the baseline concatenation model encountered memory issues with longer time horizons. We therefore train the concatenation model under a setting with short observation and prediction times, and use its weights to initialize the graph model as before. The graph model is then fine-tuned to predict for longer temporal horizons into the future. The results are reported in Table VIII. With the data from only the front camera, predicting 3 seconds into the future achieved slightly better accuracy than predicting 2 seconds. However, the confidence of the predictions at each step decreases monotonically over time. A different trend is observed when all three cameras are used. In this case, objects from all three cameras are considered in a single graph, with the locations for side-camera objects adjusted by shifting them to the left or right (shifting the coordinates) based on the view that the objects come from. When using all three cameras, having observed 2 seconds predicts almost as well as having observed 4 seconds. This is in accordance with the assumption that the side cameras give a wider view of the scene and provide better cues for relationship reasoning to predict pedestrian intent, which is also verified by the boost in performance across most of the settings.

Obs. Pred. Avg Accuracy on predicted frames
Front Camera All 3 Cameras
2s 1s 78.68% 81.20%
2s 2s 78.09% 80.49%
2s 3s 78.16% 80.77%
4s 1s 80.36% 81.53%
4s 2s 80.06% 81.73%
4s 3s 80.32% 79.62%
TABLE VIII: Accuracy compared at different prediction lengths on the STIP dataset. Our model takes in 2 or 4 seconds of observation and predicts for 1, 2, or 3 seconds into the future. The 'Front Camera' column reports results of spatiotemporal reasoning on a graph built from the front camera only, and the last column shows results using all three cameras (left, front, & right).

V Conclusion

In this paper, we proposed a method based on graph convolution to model the spatiotemporal relationships of pedestrians and other objects in the scene. We build our spatiotemporal graph by considering each segmented instance in a frame as a node. Pedestrian-centric and location-centric graphs are built, and the features extracted for each graph at each time point are fed into a gated recurrent unit. We use this model for predicting the pedestrians' intention, which is defined as the future action of crossing or not crossing. We also introduced a dataset, STIP, tailored for intent prediction in dense driving scenes. The results show that our spatiotemporal relationship reasoning model can predict the intention with an accuracy of over 80% on STIP and slightly below 80% on JAAD about one second ahead of the time that the actual crossing happens. As a direction for future work, further improvements over the spatiotemporal reasoning framework can be obtained by incorporating more intermediate annotations as well as a probability calibration method [16] for obtaining reliable confidence scores for the predictions.

Acknowledgments

This research was supported by the Toyota Research Institute (TRI). This article solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity. The authors would like to thank Karttikeya Mangalam for helping with obtaining results of the baseline methods.

References

  1. Y. Abu Farha, A. Richard and J. Gall (2018) When will you do what?-anticipating temporal occurrences of activities. In CVPR, pp. 5343–5352. Cited by: §II, §IV-B, TABLE III.
  2. S. Aditya, Y. Yang, C. Baral, Y. Aloimonos and C. Fermüller (2018) Image understanding using vision and reasoning through scene description graph. CVIU 173, pp. 33–45. Cited by: §II.
  3. A. Alahi, K. Goel, V. Ramanathan, A. Robicquet, L. Fei-Fei and S. Savarese (2016) Social LSTM: human trajectory prediction in crowded spaces. In CVPR, pp. 961–971. Cited by: §I, §II.
  4. T. Bandyopadhyay, C. Z. Jie, D. Hsu, M. H. Ang, D. Rus and E. Frazzoli (2013) Intention-aware pedestrian avoidance. In Experimental Robotics, pp. 963–977. Cited by: §I, §II.
  5. P. Battaglia, R. Pascanu, M. Lai and D. J. Rezende (2016) Interaction networks for learning about objects, relations and physics. In NeurIPS, pp. 4502–4510. Cited by: §II.
  6. S. Bonnin, T. H. Weisswange, F. Kummert and J. Schmüdderich (2014) Pedestrian crossing prediction using multiple context-based models. In ITSC, pp. 378–385. Cited by: §I, §II.
  7. H. Caesar, V. Bankiti, A. H. Lang, S. Vora, V. E. Liong, Q. Xu, A. Krishnan, Y. Pan, G. Baldan and O. Beijbom (2019) nuScenes: a multimodal dataset for autonomous driving. arXiv preprint arXiv:1903.11027. Cited by: TABLE II.
  8. Z. Cao, G. Hidalgo, T. Simon, S. Wei and Y. Sheikh (2018) OpenPose: realtime multi-person 2D pose estimation using Part Affinity Fields. In arXiv preprint arXiv:1812.08008, Cited by: §IV-B.
  9. L. Chen, J. Lu, Z. Song and J. Zhou (2018) Part-activated deep reinforcement learning for action prediction. In ECCV, pp. 421–436. Cited by: §II.
  10. Y. F. Chen, M. Liu and J. P. How (2016) Augmented dictionary learning for motion prediction. In ICRA, pp. 2527–2534. Cited by: §II.
  11. Y. Chen, M. Rohrbach, Z. Yan, S. Yan, J. Feng and Y. Kalantidis (2019) Graph-based global reasoning networks. In CVPR, Cited by: §II.
  12. W. Cong, W. Wang and W. Lee (2018) Scene graph generation via conditional random fields. arXiv preprint arXiv:1811.08075. Cited by: §II.
  13. Z. Fang and A. M. López (2018) Is the pedestrian going to cross? answering by 2D pose estimation. In 2018 IEEE Intelligent Vehicles Symposium (IV), pp. 1271–1276. Cited by: §I, §II, §IV-B, §IV-C, TABLE III.
  14. A. Geiger, P. Lenz, C. Stiller and R. Urtasun (2013) Vision meets robotics: the kitti dataset. IJRR 32 (11), pp. 1231–1237. Cited by: TABLE II.
  15. D. Gerónimo and A. M. López (2014) Vision-based pedestrian protection systems for intelligent vehicles. Springer. Cited by: §I, §II.
  16. C. Guo, G. Pleiss, Y. Sun and K. Q. Weinberger (2017) On calibration of modern neural networks. In ICML, pp. 1321–1330. Cited by: §V.
  17. A. Gupta, J. Johnson, L. Fei-Fei, S. Savarese and A. Alahi (2018) Social GAN: socially acceptable trajectories with generative adversarial networks. In CVPR, pp. 2255–2264. Cited by: §II.
  18. S. Hamid Rezatofighi, A. Milan, Z. Zhang, Q. Shi, A. Dick and I. Reid (2015) Joint probabilistic data association revisited. In ICCV, pp. 3047–3055. Cited by: §IV-A2.
  19. M. S. Ibrahim, S. Muralidharan, Z. Deng, A. Vahdat and G. Mori (2016) A hierarchical deep temporal model for group activity recognition. In CVPR, pp. 1971–1980. Cited by: §II.
  20. E. Insafutdinov, M. Andriluka, L. Pishchulin, S. Tang, E. Levinkov, B. Andres and B. Schiele (2017-07) ArtTrack: articulated multi-person tracking in the wild. In CVPR, Cited by: §II.
  21. U. Iqbal, A. Milan and J. Gall (2017-07) PoseTrack: joint multi-person pose estimation and tracking. In CVPR, Cited by: §II.
  22. A. Jain, A. R. Zamir, S. Savarese and A. Saxena (2016) Structural-RNN: deep learning on spatio-temporal graphs. In CVPR, pp. 5308–5317. Cited by: §II.
  23. N. Jaipuria, G. Habibi and J. P. How (2018) A transferable pedestrian motion prediction model for intersections with different geometries. arXiv preprint arXiv:1806.09444. Cited by: §I, §II.
  24. N. Japuria, G. Habibi and J. P. How (2017) CASNSC: a context-based approach for accurate pedestrian motion prediction at intersections. In NeurIPS, Cited by: §I, §II.
  25. J. Johnson, A. Gupta and L. Fei-Fei (2018) Image generation from scene graphs. In CVPR, pp. 1219–1228. Cited by: §II.
  26. V. Karasev, A. Ayvaci, B. Heisele and S. Soatto (2016) Intent-aware long-term prediction of pedestrian motion. In ICRA, pp. 2543–2549. Cited by: §I, §II.
  27. W. Kim, M. S. Ramanagopal, C. Barto, M. Yu, K. Rosaen, N. Goumas, R. Vasudevan and M. Johnson-Roberson (2019) PedX: benchmark dataset for metric 3-d pose estimation of pedestrians in complex urban intersections. IEEE RA-L 4 (2), pp. 1940–1947. Cited by: TABLE II.
  28. T. Kipf, E. Fetaya, K. Wang, M. Welling and R. Zemel (2018) Neural relational inference for interacting systems. arXiv preprint arXiv:1802.04687. Cited by: §II.
  29. T. N. Kipf and M. Welling (2016) Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Cited by: §III-B1.
  30. J. F. P. Kooij, N. Schneider, F. Flohr and D. M. Gavrila (2014) Context-based pedestrian path prediction. In ECCV, pp. 618–633. Cited by: §I, §II.
  31. H. S. Koppula and A. Saxena (2016) Anticipating human activities using object affordances for reactive robotic response. TPAMI 38 (1), pp. 14–29. Cited by: §II.
  32. I. Kotseruba, A. Rasouli and J. K. Tsotsos (2016) Joint attention in autonomous driving (jaad). arXiv preprint arXiv:1609.04741. Cited by: §IV-A1, TABLE II.
  33. T. Lan, T. Chen and S. Savarese (2014) A hierarchical representation for future action prediction. In ECCV, pp. 689–704. Cited by: §II.
  34. J. Li, A. Raventos, A. Bhargava, T. Tagawa and A. Gaidon (2018) Learning to fuse things and stuff. arXiv preprint arXiv:1812.01192. Cited by: §I, §III-A.
  35. T. Lin, P. Goyal, R. Girshick, K. He and P. Dollár (2017) Focal loss for dense object detection. In CVPR, pp. 2980–2988. Cited by: §III-A.
  36. T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár and C. L. Zitnick (2014) Microsoft coco: common objects in context. In ECCV, pp. 740–755. Cited by: §IV-B.
  37. W. Liu, S. Liao, W. Hu, X. Liang and X. Chen (2018-09) Learning efficient single-stage pedestrian detectors by asymptotic localization fitting. In ECCV, Cited by: §I, §II.
  38. K. Mangalam, E. Adeli, K. Lee, A. Gaidon and J. C. Niebles (2020) Disentangling human dynamics for pedestrian locomotion forecasting with noisy supervision. In WACV, Cited by: §I, §IV-B, TABLE III.
  39. G. Neuhold, T. Ollmann, S. Rota Bulo and P. Kontschieder (2017) The mapillary vistas dataset for semantic understanding of street scenes. In ICCV, pp. 4990–4999. Cited by: §III-A.
  40. J. Noh, S. Lee, B. Kim and G. Kim (2018-06) Improving occlusion and hard negative handling for single-stage pedestrian detectors. In CVPR, Cited by: §I, §II.
  41. S. Qi, W. Wang, B. Jia, J. Shen and S. Zhu (2018) Learning human-object interactions by graph parsing neural networks. In ECCV, pp. 401–417. Cited by: §II.
  42. R. Quintero, I. Parra, D. F. Llorca and M. Sotelo (2014) Pedestrian path prediction based on body language and action classification. In ITSC, pp. 679–684. Cited by: §I, §II.
  43. A. Rasouli, I. Kotseruba, T. Kunic and J. K. Tsotsos (2019) PIE: a large-scale dataset and models for pedestrian intention estimation and trajectory prediction. In ICCV, pp. 6262–6271. Cited by: §I, §II.
  44. J. Redmon and A. Farhadi (2018) Yolov3: an incremental improvement. arXiv preprint arXiv:1804.02767. Cited by: §III-A.
  45. S. Ren, K. He, R. Girshick and J. Sun (2015) Faster R-CNN: towards real-time object detection with region proposal networks. In NeurIPS, pp. 91–99. Cited by: §III-A.
  46. N. Rhinehart and K. M. Kitani (2017) First-person activity forecasting with online inverse reinforcement learning. In ICCV, pp. 3696–3705. Cited by: §II.
  47. E. Ristani and C. Tomasi (2018-06) Features for multi-target multi-camera tracking and re-identification. In CVPR, Cited by: §II.
  48. A. Sadeghian, V. Kosaraju, A. Sadeghian, N. Hirose and S. Savarese (2018) Sophie: an attentive gan for predicting paths compliant to social and physical constraints. arXiv preprint arXiv:1806.01482. Cited by: §I, §II.
  49. K. Saleh, M. Hossny and S. Nahavandi (2018) Long-term recurrent predictive model for intent prediction of pedestrians via inverse reinforcement learning. In DICTA, pp. 1–8. Cited by: §I, §II.
  50. Y. Seo, M. Defferrard, P. Vandergheynst and X. Bresson (2018) Structured sequence modeling with graph convolutional recurrent networks. In International Conference on Neural Information Processing, pp. 362–373. Cited by: §I, §II.
  51. J. Shi, H. Zhang and J. Li (2018) Explainable and explicit visual reasoning over scene graphs. In AAAI, Cited by: §II.
  52. Y. Shi, B. Fernando and R. Hartley (2018) Action anticipation with rbf kernelized feature mapping RNN. In ECCV, pp. 301–317. Cited by: §II.
  53. T. Shu, S. Todorovic and S. Zhu (2017) CERN: confidence-energy recurrent network for group activity recognition. In CVPR, pp. 5523–5531. Cited by: §II.
  54. S. Tang, B. Andres, M. Andriluka and B. Schiele (2016) Multi-person tracking by multicut and deep matching. In ECCV, pp. 100–111. Cited by: §II.
  55. S. Tang, M. Andriluka, B. Andres and B. Schiele (2017-07) Multiple people tracking by lifted multicut and person re-identification. In CVPR, Cited by: §II.
  56. D. Teney, L. Liu and A. van den Hengel (2017) Graph-structured representations for visual question answering. In CVPR, pp. 1–9. Cited by: §II.
  57. S. Van Steenkiste, M. Chang, K. Greff and J. Schmidhuber (2018) Relational neural expectation maximization: unsupervised discovery of objects and their interactions. arXiv preprint arXiv:1802.10353. Cited by: §II.
  58. D. Vazquez, A. M. Lopez, J. Marin, D. Ponsa and D. Geronimo (2014) Virtual and real world adaptation for pedestrian detection. TPAMI 36 (4), pp. 797–809. Cited by: §I.
  59. L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang and L. Van Gool (2016) Temporal segment networks: towards good practices for deep action recognition. In ECCV, Cited by: §IV-B, TABLE III.
  60. X. Wang and A. Gupta (2018) Videos as space-time region graphs. In ECCV, pp. 399–417. Cited by: §II.
  61. X. Wang, T. Xiao, Y. Jiang, S. Shao, J. Sun and C. Shen (2018-06) Repulsion loss: detecting pedestrians in a crowd. In CVPR, Cited by: §I, §II.
  62. P. Wei, Y. Liu, T. Shu, N. Zheng and S. Zhu (2018) Where and why are they looking? jointly inferring human attention and intentions in complex tasks. In CVPR, pp. 6801–6809. Cited by: §II.
  63. N. Wojke, A. Bewley and D. Paulus (2017) Simple online and realtime tracking with a deep association metric. In ICIP, pp. 3645–3649. Cited by: §I.
  64. D. Xie, T. Shu, S. Todorovic and S. Zhu (2017) Learning and inferring “dark matter” and predicting human intents and trajectories in videos. TPAMI 40 (7), pp. 1639–1652. Cited by: §II.
  65. Y. Xiu, J. Li, H. Wang, Y. Fang and C. Lu (2018) Pose flow: efficient online pose tracking. CoRR abs/1802.00977. External Links: Link, 1802.00977 Cited by: §II.
  66. D. Xu, Y. Zhu, C. B. Choy and L. Fei-Fei (2017) Scene graph generation by iterative message passing. In CVPR, pp. 5410–5419. Cited by: §II.
  67. Y. Xu, Z. Piao and S. Gao (2018-06) Encoding crowd interaction with deep neural network for pedestrian trajectory prediction. In CVPR, Cited by: §I, §II.
  68. J. Yang, J. Lu, S. Lee, D. Batra and D. Parikh (2018) Graph r-cnn for scene graph generation. In ECCV, pp. 670–685. Cited by: §II.
  69. D. Yu, K. Su, J. Sun and C. Wang (2018) Multi-person pose estimation for pose tracking with enhanced cascaded pyramid network. In ECCV, pp. 221–226. Cited by: §II.
  70. F. Yu, W. Xian, Y. Chen, F. Liu, M. Liao, V. Madhavan and T. Darrell (2018) BDD100k: a diverse driving video database with scalable annotation tooling. arXiv preprint arXiv:1805.04687. Cited by: TABLE II.
  71. R. Zellers, M. Yatskar, S. Thomson and Y. Choi (2018) Neural motifs: scene graph parsing with global context. In CVPR, pp. 5831–5840. Cited by: §II.
  72. K. Zeng, S. Chou, F. Chan, J. Carlos Niebles and M. Sun (2017) Agent-centric risk assessment: accident anticipation and risky region localization. In CVPR, pp. 2222–2230. Cited by: §III-B1.
  73. S. Zhang, R. Benenson, M. Omran, J. Hosang and B. Schiele (2018) Towards reaching human performance in pedestrian detection. TPAMI 40 (4), pp. 973–986. Cited by: §I.
  74. S. Zhang, J. Yang and B. Schiele (2018-06) Occluded pedestrian detection through guided attention in cnns. In CVPR, Cited by: §II.
  75. S. Zhang, L. Wen, X. Bian, Z. Lei and S. Z. Li (2018-09) Occlusion-aware R-CNN: detecting pedestrians in a crowd. In ECCV, Cited by: §I, §II.
  76. B. Zhou, A. Andonian, A. Oliva and A. Torralba (2018) Temporal relational reasoning in videos. In ECCV, pp. 803–818. Cited by: §IV-B, TABLE III.
  77. C. Zhou and J. Yuan (2018-09) Bi-box regression for pedestrian detection and occlusion estimation. In ECCV, Cited by: §II.