Dynamic Spatiotemporal Graph-based CNNs for Traffic Flow Prediction
Abstract
Forecasting future traffic flows from previous ones is a challenging problem because of the complex and dynamic nature of their spatiotemporal structures. Most existing graph-based CNNs attempt to capture static relations while largely neglecting the dynamics underlying sequential data. In this paper, we present dynamic spatiotemporal graph-based CNNs (DSTGCNNs), which learn expressive features to represent spatiotemporal structures and predict future traffic flows from surveillance video data. In particular, DSTGCNN is a two-stream network. In the flow prediction stream, we present a novel graph-based spatiotemporal convolutional layer to extract features from a graph representation of traffic flows; several such layers are then stacked together to predict future flows over time. Meanwhile, the relations between traffic flows in the graph are often time-variant as the traffic condition changes over time. To capture the graph dynamics, we use the graph prediction stream to predict the dynamic graph structures, and the predicted structures are fed into the flow prediction stream. Experiments on real datasets demonstrate that the proposed model achieves competitive performance compared with other state-of-the-art methods.
Index Terms: Graph Neural Networks, Traffic Forecasting, Time Series Regression
1 Introduction
The goal of traffic flow forecasting is to predict future traffic flows based on previous flows measured by sensors; it is one of the most challenging problems in Intelligent Transportation Systems (ITS). In the context of traffic flow forecasting, "traffic flows" or "traffic volumes" mean the number of cars recorded by a sensor network in a period of time. Accurate traffic forecasting enables individuals and policy makers to make decisions on route planning and traffic control.
Advanced algorithms that can model the interactions between dynamic traffic flows are required to predict their future trends. In the literature, data-driven approaches have attracted much research attention. For example, statistical methods such as the autoregressive integrated moving average (ARIMA) [davis1990adaptive] and its variants [williams2003modeling] are well studied. The performance of such methods is limited because their capacity is insufficient to model the complex nonlinear dependency among traffic flows in either spatial or temporal contexts. Recently, deep learning methods have shown promising results in dynamic prediction over sequential data, including the stacked autoencoder (SAE) [lv2015traffic], DBN [huang2014deep], LSTM [dai2017deeptrend] and CNN [zhang2016deep]. Although these methods have made some progress in modeling complex patterns in sequential data, they have not yet fully explored both spatial and temporal structures of traffic flows in an integrated fashion.
Several methods [li2017graph, yu2017spatio] attempt to model traffic flows by unrolling static graphs through time, where each vertex denotes the reading of flows at a given location and edges represent how the flows at two locations affect each other. These works show that the graph structure is capable of describing the spatiotemporal dependency between flows. However, they usually have to assume that the graph structures, especially the relations between flows at different locations, do not change over time. This implies that traffic conditions are time-invariant, which is not true in the real world.
To address this problem, we propose a dynamic spatiotemporal graph-based CNN (DSTGCNN), which can model both the dynamics of traffic flows and their correlations. The contributions of this paper are threefold.

We propose a novel spatiotemporal graph-based convolutional layer that jointly extracts both spatial and temporal information from the traffic flow data. This layer consists of two factorized convolutions applied to the spatial and temporal dimensions respectively, which significantly reduces computation and can be implemented in a parallel way. We then build a hierarchy of stacked graph-based convolutional layers to extract expressive features and make traffic flow predictions.

We also learn evolving graph structures that adapt to the fast-changing traffic conditions over time. The learned graph structures can be seamlessly integrated with the stacked graph-based convolutional layers to make accurate traffic flow predictions.

We evaluate the proposed model on both a traffic video dataset and the public Beijing taxi dataset. Experimental results demonstrate that DSTGCNN outperforms the state-of-the-art methods.
2 Related Work
The study of traffic flow forecasting can be traced back to the 1970s [larry1995event]. Since then, a large number of methods have been proposed, and a recent survey comprehensively summarizes them [vlahogianni2014short]. Early methods were often based on simulations, which were computationally demanding and required careful tuning of model parameters. With modern real-time traffic data collection systems, data-driven approaches have attracted more research attention. In statistics, a family of autoregressive integrated moving average (ARIMA) models [davis1990adaptive] has been proposed to predict traffic flows. However, these autoregressive models rely on the stationarity assumption on sequential data, which fails to hold in real traffic conditions that vary over time. In [hoang2016fccf], Intrinsic Gaussian Markov Random Fields (IGMRF) are developed to model both the seasonal flows and the trend flows, and are shown to be robust against noise and missing data. Some conventional learning methods, including linear SVR [jin2007simultaneously] and random forest regression [leshem2007traffic], have also been tailored to the traffic flow prediction problem. Nevertheless, these shallow models depend on handcrafted features and cannot fully explore the complex spatiotemporal patterns in big traffic data, which greatly limits their performance.
With the development of deep learning, various network architectures have been proposed for predicting traffic flows. Early attempts include SAE [lv2015traffic] and DBN [huang2014deep], but neither is effective in modeling the spatiotemporal dependency between traffic flows. To capture the short- and long-term temporal dependency, LSTMs are used to model the evolution of traffic flows [dai2017deeptrend]. However, the typical LSTM model is unable to model the spatial correlations, which play an important role in making spatially coherent predictions of traffic flows. To close this gap, hybrid models, in which temporal models such as LSTM and GRU are combined with spatial models like 1D-CNN [wu2016short] and graphs [li2017graph], have been proposed and achieve impressive performance. Nevertheless, recurrent models are restricted to processing sequential data successively, one step after another, which limits the parallelization of the underlying computations. In contrast, the proposed model utilizes convolutions to capture both spatial and temporal data dependencies, which is much more efficient than the compared recurrent models.
Two recent state-of-the-art methods are DCRNN [li2017diffusion] and STGCN [yu2017spatio]. They show appealing results on public datasets. However, DCRNN is less efficient as it involves recurrent feedforward computation, and STGCN uses a fixed affinity matrix that is not suitable for dynamic traffic environments. In contrast, our paper provides an efficient and dynamic method for better traffic prediction.
3 Preliminaries
3.1 Structured Information Extraction
To collect traffic flow data, multiple sensors including loop detectors [jagadish2014big], radar [yu2017spatio] and GPS trajectories [hoang2016fccf] can be leveraged. However, they are either incomplete or unable to capture fine-grained trajectory flows of individual vehicles. In contrast, we propose in this paper to use surveillance videos to record and predict traffic flows. Thanks to the fast development of computer vision technologies, we can not only track individual flows of vehicles but also link their identities across different cameras over time.
Specifically, we first use SSD [liu2016ssd] and KCF [henriques2015high] to detect and track vehicle instances in videos. Then the plate numbers are recognized so that the same vehicles can be matched across cameras. For each instance, we record the detection time, the camera location and the plate number, which are referred to as structured information in this paper. With them, at each location, we can count traffic volumes over a period of time. Meanwhile, by tracking plate numbers, we can estimate the average time to travel from one location to another. The computed traffic volumes and travel times are the inputs of our method.
It is worth noting that estimating the travel time across different locations does not require recognizing the plate numbers of all vehicles. The estimated travel time only needs to be computed over those with recognized plate numbers, which greatly simplifies the problem.
3.2 Mathematical Notations
Suppose at each time $t$, we have traffic volumes $x_t \in \mathbb{R}^{N \times C}$ and travel time data, where $N$ is the number of locations and $C$ is the number of input channels. The channels represent different directions of traffic volumes at a location. Our method uses the previous $P$ traffic volumes to forecast the future volumes after $Q$ time steps. For simplicity, we use a tensor $\mathcal{X}_t = (x_{t-P+1}, \ldots, x_t)$ to denote the input sequence. Without ambiguity, the subscript $t$ may be omitted in the rest of the paper.

To model the complex dependency among traffic volumes, in DSTGCNN we use an undirected graph $G = (V, A)$ to represent the traffic volumes, where the vertex set $V$ represents traffic volumes at different locations and the affinity matrix $A \in \mathbb{R}^{N \times N}$ depicts the connectivity between vertices. We derive the affinity matrix from the travel time, with $A_{ij}$ determined by the travel time $t_{ij}$ between locations $i$ and $j$. Therefore, the historical traffic volumes can be represented as $P$ stacked graph frames.
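The graph construction above can be sketched in a few lines of numpy. Note that the exact mapping from travel time to affinity is not spelled out in this section, so the reciprocal mapping below (shorter travel time, stronger connection) is our own illustrative assumption.

```python
import numpy as np

def affinity_from_travel_time(T, eps=1e-6):
    """Build a symmetric affinity matrix from pairwise travel times.

    The reciprocal mapping (affinity = 1 / travel time) is an assumed
    illustration, not the paper's exact formula.
    """
    A = 1.0 / (T + eps)          # assumed mapping: closer in time -> stronger edge
    np.fill_diagonal(A, 0.0)     # no self-loops
    return (A + A.T) / 2.0       # symmetrize for an undirected graph

def graph_laplacian(A):
    """Unnormalized graph Laplacian L = D - A, with D the diagonal degree matrix."""
    D = np.diag(A.sum(axis=1))
    return D - A

# Toy example: 3 locations, pairwise travel times in minutes.
T = np.array([[0.0, 10.0, 20.0],
              [10.0, 0.0, 5.0],
              [20.0, 5.0, 0.0]])
L = graph_laplacian(affinity_from_travel_time(T))
```

A quick sanity check on the Laplacian is that every row sums to zero, which the construction above guarantees by design.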
4 Method
The proposed DSTGCNN framework can model both the complex spatiotemporal dependency between traffic flows and the fast-evolving traffic conditions. It takes three inputs: the previous traffic volumes represented as stacked graph frames, the previous traffic conditions represented as a series of affinity matrices, and auxiliary information. These inputs are fed into a two-stream network: the graph prediction stream predicts the traffic conditions, while the flow prediction stream forecasts the evolution of traffic flows given the predicted traffic conditions. The overall architecture of DSTGCNN is presented in Figure 1. In the following subsections, we describe the two streams in detail.
4.1 Flow Prediction Stream
In this subsection, we introduce the structure of the flow prediction stream, which is the main subnetwork performing prediction. First, we present the building block of this stream, a novel Spatiotemporal Graph-based Convolutional layer (STC) that works with spatiotemporal graph data. Then we build a two-step hierarchical model using STC layers to predict traffic flows.
Spatiotemporal Graph-based Convolution
The CNN is a popular tool in computer vision as it is powerful at extracting hierarchical features that are expressive in many high-level recognition and prediction tasks. However, it cannot be directly applied to structured graph data like ours. Therefore, we propose a novel layer that works with spatiotemporal graph data and is as efficient as conventional convolutions.
Inspired by [howard2017mobilenets], which factorizes convolutions along two separate dimensions, we also present two factorized convolutions applied to the spatial and temporal dimensions respectively, in the hope of reducing computational overhead. They form the proposed Spatiotemporal Graph-based Convolutional layer (STC), whose structure is shown in Figure 2. The input to an STC layer is a sequence of graph-structured feature maps organized by their timestamps and channels. Each graph is first convolved spatially to extract its spatial feature representation, and then the features of multiple graphs are fused by a temporal convolution in a sliding time window. In this way, both spatial and temporal information is merged to yield a dynamic feature representation for predicting future flows.
Spatial Convolution
Let us first define the spatial convolution on a given graph. The diagonal degree matrix $D$ and the graph Laplacian $L$ are defined as $D_{ii} = \sum_j A_{ij}$ and $L = D - A$ respectively. Then the Singular Value Decomposition (SVD) is applied to the Laplacian as $L = U \Lambda U^T$, where $U$ consists of eigenvectors and $\Lambda$ is a diagonal matrix of eigenvalues. The matrix $U^T$ is the Graph Fourier Transform matrix, which transforms an input graph signal $x$ into its frequency domain $\hat{x} = U^T x$. With the same notation as in [henaff2015deep], the convolution of a graph signal $x$ with filter $w$ on $G$ is defined as

$$x *_G w = U\big((U^T x) \odot (U^T w)\big), \tag{1}$$

where $\odot$ is the element-wise product.
Let us define $\hat{w} = U^T w$ as the filter in the frequency domain; then the convolution can be rewritten as

$$x *_G w = U \,\mathrm{diag}(\hat{w})\, U^T x. \tag{2}$$
The above graph convolution requires the filter $\hat{w}$ to have the same size as the input signal $x$, which would be inefficient and hard to train when the graph is large. To make the filter "localized" as in CNNs, $\mathrm{diag}(\hat{w})$ can be approximated by polynomials of $\Lambda$ [defferrard2016convolutional] so that $\mathrm{diag}(\hat{w}) \approx \sum_{k=0}^{K-1} \theta_k \Lambda^k$, and Eq. 2 can be rewritten as

$$x *_G w \approx \sum_{k=0}^{K-1} \theta_k L^k x. \tag{3}$$

Now the trainable parameters become $\theta = (\theta_0, \ldots, \theta_{K-1})$, whose size is restricted to $K$. In addition, a node is only supported by its neighbors within $K-1$ hops [hammond2011wavelets].
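The localized filter in Eq. 3 amounts to a weighted sum of powers of the Laplacian applied to the graph signal. A minimal numpy sketch (the coefficients `theta` here are placeholders, not trained weights):

```python
import numpy as np

def poly_graph_conv(x, L, theta):
    """Localized spectral graph convolution: y = sum_k theta[k] * (L^k @ x).

    Each power L^k mixes information from k-hop neighborhoods, so a
    length-K theta yields a filter supported on (K-1)-hop neighbors.
    """
    y = np.zeros_like(x, dtype=float)
    Lk = np.eye(L.shape[0])          # L^0 = identity
    for t in theta:
        y += t * (Lk @ x)
        Lk = Lk @ L                  # advance to the next power of L
    return y
```

With `theta = [1.0]` the filter is the identity, and with `theta = [0.0, 1.0]` it returns `L @ x`, which makes the locality of the filter easy to verify.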
Then we use the convolution operation above to define the spatial convolution in the STC layer. When computing the spatial convolution between the feature map $X^{(l)}$ and kernel $W^{(l)}$ in the $l$-th layer of DSTGCNN, where $C_l$ is the channel number, the graph-based convolution defined above is applied to each graph frame separately. Specifically, the graph feature $x_{c,t}^{(l)}$ at the $c$-th channel and $t$-th time step is individually filtered such that

$$y_{c,t}^{(l)} = x_{c,t}^{(l)} *_G w_{c,t}^{(l)}, \tag{4}$$

where $w_{c,t}^{(l)}$ and $y_{c,t}^{(l)}$ are the individual kernel and filtered output at the $c$-th channel and $t$-th time step, while the tensor $Y^{(l)}$ stacking all $y_{c,t}^{(l)}$ is the whole output.
Temporal Convolution
At each time step, after the spatial convolution, traffic flows are fused on the underlying graph, resulting in a multi-layered feature tensor compactly representing individual traffic flows and their spatial interactions.
However, information across time steps is still isolated. To obtain spatiotemporal features, many previous methods [jain2016structural, dai2017deeptrend, sutskever2014sequence] are based on recurrent models, which process sequential data iteratively, step by step. Consequently, the information of the current step can be processed only after all previous steps are done, which limits the efficiency of recurrent models.
To make temporal operations as efficient as a convolution, we perform a conventional convolution along the time dimension to extract the temporal relations, which we name the temporal convolution. For a feature tensor $Y$ of size $P \times N \times C$, its convolution with a kernel $V$ spanning a sliding window of $S$ time steps is performed as

$$Z_t = \sum_{s=1}^{S} V_s \, Y_{t+s-\lceil S/2 \rceil}, \tag{5}$$

where $S$ is the size of the time window. To keep the size of the time dimension unchanged, we pad zeros on both sides of the time dimension.
Putting Together
By combining Eq. 4 and Eq. 5, we have the following definition of the spatiotemporal graph-based convolution:

$$\mathrm{STC}(X) = \mathrm{TC}\big(\mathrm{SC}(X)\big), \tag{6}$$

where $\mathrm{SC}$ denotes the spatial convolution in Eq. 4 and $\mathrm{TC}$ the temporal convolution in Eq. 5,
whose structure is shown in Figure 2.
We now analyze the efficiency of our factorized convolution. Without such factorization, one needs to build a graph with $P \times N$ nodes to capture both spatial and temporal structures, making the graph convolution in Eq. 2 have a complexity of $O(P^2 N^2)$. In contrast, our STC layer builds graphs with $N$ nodes and separates the spatial and temporal convolutions, which has a complexity of $O(P N^2)$ and is therefore much more efficient.
Two-step Prediction
The STC layers are able to jointly extract both spatial and temporal information from the sequence of traffic flows. We can build a hierarchical model using such layers to extract features and predict future flows from previous flows. A straightforward way is to directly predict the future traffic volume after $Q$ intervals, as in existing methods [dai2017deeptrend, yu2017spatio, defferrard2016convolutional]. This one-step prediction scheme is simple but has two disadvantages. First, it only uses ground truth data at time $t+Q$ to train the model but neglects those between $t+1$ and $t+Q-1$. Second, when $Q$ is large, it is hard for one-step methods to capture traffic trends over such a long horizon, since the input and the future volumes may be very different.
To solve the above issues, we propose a new prediction scheme that divides the prediction problem into two steps. In the first step, we use the previous flows to predict the future volumes between $t+1$ and $t+Q-1$, which we call the "close future flows". During the training phase, the predicted "close future flows" are supervised by the ground truth at the corresponding time period. As a result, the ground truth data between $t+1$ and $t+Q-1$ is incorporated into the training procedure. In the second step, the "target future flows" at time $t+Q$ are predicted by considering both the previous flows and the predicted "close future flows". Compared with one-step methods, the prediction of the "target future flows" is now easier since it utilizes the "close future flows" and only predicts one step further. The two-step prediction scheme is shown in the second path of Figure 1.
Let us denote the models of the first and second steps as $F_1$ and $F_2$ respectively. Both models stack several STC layers for prediction. The loss function of the two-step prediction can be written as:

$$L_{flow} = \big\| F_1(\mathcal{X}; W_1) - Y_c \big\|_2^2 + \big\| F_2(\mathcal{X}, \hat{Y}_c; W_2) - Y_Q \big\|_2^2, \tag{7}$$

where $\hat{Y}_c = F_1(\mathcal{X}; W_1)$ is the predicted "close future flows", $Y_c$ is its ground truth, and $Y_Q$ is the ground truth of the "target future flows". $W_1$ and $W_2$ are the parameters of the two models respectively.
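In code, the two-step objective reduces to a sum of two squared-error terms, one per sub-model. A minimal numpy sketch (equal weighting of the two terms is assumed here):

```python
import numpy as np

def two_step_loss(close_pred, close_gt, target_pred, target_gt):
    """Sum of squared errors for the two-step scheme: the first model is
    supervised on the 'close future flows', the second on the 'target
    future flows'. Equal weighting of the two terms is an assumption.
    """
    return np.sum((close_pred - close_gt) ** 2) + \
           np.sum((target_pred - target_gt) ** 2)
```

In practice both terms would be averaged over a minibatch; the sketch keeps only the structure of the objective.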
Auxiliary Information Embedding
Besides previous flows, auxiliary information such as the time of day, the day of the week and the weather is useful for predicting future flows. The influence of such information is studied in [hoang2016fccf, zhang2016deep]. For example, weekdays and weekends have very different transit patterns, and a thunderstorm can suddenly reduce the traffic volumes.
To make full use of such auxiliary information, we embed it into the traffic flow prediction network. We first encode this information into one-hot vectors. These one-hot vectors are then concatenated, and several fully connected layers are used to extract a feature vector. The feature vector is later reshaped so that it can be concatenated with the traffic flow feature maps. Finally, the concatenated features are fed into the prediction modules, as shown in Figure 1.
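The one-hot encoding step can be sketched as follows. The category sizes (24 hours, 7 weekdays, a binary good/bad weather flag) are illustrative choices consistent with the auxiliary signals named above, not the paper's exact encoding.

```python
import numpy as np

def one_hot(index, size):
    """Return a length-`size` vector with a 1 at position `index`."""
    v = np.zeros(size)
    v[index] = 1.0
    return v

def auxiliary_vector(hour, day_of_week, bad_weather):
    """Concatenate one-hot encodings of time-of-day, day-of-week and a
    binary weather flag; the result would then pass through fully
    connected embedding layers before joining the flow features.
    """
    return np.concatenate([one_hot(hour, 24),          # 0..23
                           one_hot(day_of_week, 7),    # 0..6
                           one_hot(int(bad_weather), 2)])
```

For example, `auxiliary_vector(8, 2, True)` produces a 33-dimensional vector with exactly three nonzero entries, one per encoded attribute.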
4.2 Graph Prediction Stream
In this subsection, we introduce the other stream in the framework, named the graph prediction stream. Previous methods [henaff2015deep, jain2016structural, yu2017spatio] that model spatiotemporal graphs assume that the graph structure of spatiotemporal data is fixed without temporal evolution. However, in real-world applications, graph structures are dynamic. For instance, in the traffic prediction problem, traffic conditions are time-variant, implying that the connectivities between vertices in the graphs change over time. In order to model such dynamics, we introduce a stream in the framework to predict these time-variant graph structures.
In particular, at each time $t$, the STC layers in the model need a graph structure that is a function of time. It reflects the average traffic condition in the period between time $t+1$ and $t+Q$. This structure cannot be directly computed since the future travel time during $t+1$ to $t+Q$ is unavailable in the test phase. To address this problem, we introduce another path in the network to predict the graph structure from the previous travel time data. Specifically, the previous travel times are first converted to affinity matrices $A_{t-P+1}, \ldots, A_t$ to construct a tensor $\mathcal{A}$, which is then fed into a subnetwork $G$ to predict a new affinity matrix $\hat{A} = G(\mathcal{A}; W_g)$ representing the average traffic condition during $t+1$ and $t+Q$, where $W_g$ is the parameter of $G$.
During training, the graph prediction stream is supervised by minimizing the following loss function

$$L_{graph} = \big\| G(\mathcal{A}; W_g) - \bar{A} \big\|_1, \tag{8}$$

where $\bar{A}$ is the ground truth average affinity matrix during $t+1$ and $t+Q$. The $\ell_1$ norm is used to keep the loss from being dominated by a few large errors. The Laplacian of $\hat{A}$ is then computed and fed into the STC layers. In this way, the prediction model takes the dynamic traffic conditions into consideration and is thus able to make more accurate predictions of future traffic flows.
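The supervision of the graph prediction stream reduces to an entry-wise L1 distance between the predicted and ground-truth average affinity matrices; a one-function numpy sketch:

```python
import numpy as np

def graph_loss(A_pred, A_gt):
    """Entry-wise L1 loss between predicted and ground-truth affinity
    matrices; the L1 norm keeps a few large entry errors from
    dominating the loss, unlike a squared penalty.
    """
    return np.abs(A_pred - A_gt).sum()
```

This is the robustness argument in miniature: a single badly predicted entry contributes linearly rather than quadratically to the total.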
To model the relations between previous affinity matrices, a model with a global field of view is required since the entries of affinity matrices have "global" correlations. For instance, $A_{ij}$ and $A_{ji}$ are closely related no matter how far apart they are located in the matrix. Thus, we stack multiple pairs of convolutional layers, where each pair consists of two convolutional layers of different kernel sizes to obtain a large spatial extent, as shown in the first stream of Figure 1.
4.3 The Whole Model
By combining the two streams, we get the full model of DSTGCNN shown in Figure 1. The loss function of the complete model is

$$L = L_{flow} + \lambda L_{graph}, \tag{9}$$

where $\lambda$ balances the two terms.
It is worth noting that DSTGCNN is a general method to extract features from spatiotemporal graph-structured data; it can be applied not only to traffic flow prediction but also to other regression or classification tasks on graph data, especially when the graph structure is dynamic. For instance, it can be adapted to skeleton-based action recognition or pose forecasting with minor modifications.
5 Experiments
In this section, we present a series of experiments to assess the performance of the proposed method. We first introduce the datasets and the implementation details of DSTGCNN. Then we conduct ablation experiments to evaluate the effectiveness of the components of DSTGCNN. Finally, our method is compared with state-of-the-art methods on these datasets.
5.1 Dataset and Evaluation Metrics
Our experiments are conducted on two public datasets, METR-LA [jagadish2014big] and TaxiBJ [hoang2016fccf], and on our collected dataset CDHW.
We first introduce the two publicly available datasets, METR-LA [jagadish2014big] and TaxiBJ [hoang2016fccf]. METR-LA is a large-scale dataset collected from 1500 traffic loop detectors in the Los Angeles county road network. This dataset includes speed, volume and occupancy data, covering approximately 3,420 miles. As in [li2017diffusion], we choose four months of traffic speed data from Mar 1st 2012 to Jun 30th 2012 recorded by 207 sensors for our experiments. The traffic data are aggregated every 5 minutes in one direction. The TaxiBJ dataset is obtained from taxis' GPS trajectories in Beijing from 1st March to 30th June 2015. The authors partition Beijing into 26 high-level regions, and traffic volumes are aggregated every 30 minutes in two directions {In, Out}. Besides crowd flows, it also includes weather conditions, categorized into good weather (sunny, cloudy) and bad weather (rainy, storm, dusty).
We also collect a new dataset that contains speed data recorded along the highways around Chengdu, China, named the Chengdu Highway (CDHW) dataset. It was captured by 1,692 roadside sensors from 1 Nov to 30 Nov 2019. Their locations are shown in Figure 3. Data before 25 Nov are used for training and the rest for testing. The speed data are collected every 10 minutes in one direction.
For evaluation, we use the Root Mean Squared Error (RMSE), the Mean Absolute Percentage Error (MAPE) and the Mean Absolute Error (MAE), which are defined as below:

$$\mathrm{RMSE} = \sqrt{\frac{1}{TN} \sum_{t=1}^{T} \sum_{i=1}^{N} \big( \hat{y}_{t,i} - y_{t,i} \big)^2}, \quad \mathrm{MAPE} = \frac{1}{TN} \sum_{t=1}^{T} \sum_{i=1}^{N} \left| \frac{\hat{y}_{t,i} - y_{t,i}}{y_{t,i}} \right|, \quad \mathrm{MAE} = \frac{1}{TN} \sum_{t=1}^{T} \sum_{i=1}^{N} \big| \hat{y}_{t,i} - y_{t,i} \big|, \tag{10}$$

where $\hat{y}_{t,i}$ and $y_{t,i}$ are the predicted and ground truth traffic volumes (speed) at time $t$ and location $i$.
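These metrics translate directly into numpy; a small reference implementation (MAPE assumes nonzero ground-truth values):

```python
import numpy as np

def rmse(pred, gt):
    """Root mean squared error over all predictions."""
    return np.sqrt(np.mean((pred - gt) ** 2))

def mae(pred, gt):
    """Mean absolute error over all predictions."""
    return np.mean(np.abs(pred - gt))

def mape(pred, gt):
    """Mean absolute percentage error; assumes gt entries are nonzero."""
    return np.mean(np.abs((pred - gt) / gt)) * 100.0
```

RMSE penalizes large errors more heavily than MAE, while MAPE normalizes by the ground-truth magnitude, which is why all three are reported together.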
5.2 Implementation Details
Models $F_1$ and $F_2$ presented in subsection 4.1.2 each consist of three STC layers. A ReLU layer is inserted between two STC layers to introduce nonlinearity, as in CNNs, and another ReLU layer is added after the last STC layer to ensure nonnegative predictions. In the spatial convolution of the STC layer, the order of the polynomial approximation is set to 5. The graph prediction stream consists of three pairs of convolutional layers. The auxiliary information is encoded by two fully connected layers, so that the output can be reshaped and concatenated with the flow features. In the training procedure, we first pretrain the graph prediction subnetwork and then jointly train the whole model. The model is trained by SGD with momentum for 100 epochs, with the learning rate adjusted after the first 50 epochs. The framework is implemented in PyTorch.
Table 1. Ablation results on the TaxiBJ test set.

Method       | Out: MAE / RMSE / MAPE  | In: MAE / RMSE / MAPE
Basel        | 10.49 / 13.48 / 13.11%  | 10.71 / 14.44 / 14.46%
Basel+AE     | 10.24 / 13.18 / 12.81%  | 10.41 / 13.96 / 14.35%
Basel+AE+DG  | 10.03 / 12.88 / 12.75%  | 10.40 / 13.94 / 14.39%
DSTGCNN      |  9.93 / 12.78 / 12.56%  | 10.24 / 13.78 / 14.02%
Table 2. Performance comparison on the METR-LA dataset.

T       | Metric | ARIMA | FNN   | FCLSTM | DCRNN | STGCN | DSTGCNN
15 min  | MAE    | 3.99  | 3.99  | 3.44   | 2.77  | 2.87  | 2.68
        | RMSE   | 8.21  | 7.94  | 6.30   | 5.38  | 5.54  | 5.35
        | MAPE   | 9.6%  | 9.9%  | 9.6%   | 7.3%  | 7.4%  | 7.2%
30 min  | MAE    | 5.15  | 4.23  | 3.77   | 3.15  | 3.48  | 3.01
        | RMSE   | 10.45 | 8.17  | 7.23   | 6.45  | 6.84  | 6.23
        | MAPE   | 12.7% | 12.9% | 10.9%  | 8.8%  | 9.4%  | 8.5%
60 min  | MAE    | 6.90  | 4.49  | 4.37   | 3.60  | 4.45  | 3.41
        | RMSE   | 13.23 | 8.69  | 8.69   | 7.59  | 8.41  | 7.47
        | MAPE   | 17.4% | 14.0% | 13.2%  | 10.5% | 11.8% | 10.3%
5.3 Ablation Study
To investigate the effectiveness of each component, we first build a plain baseline model which stacks three STC layers, uses the one-step prediction scheme, keeps the graph structure fixed, and does not use auxiliary information. The static graph structure is calculated by averaging all travel times in the training set. Then different configurations are tested, including:

The baseline model, denoted as Basel;

The baseline model with auxiliary information embedding (AE), denoted as Basel+AE;

The above configuration plus dynamic graph learning (DG), denoted as Basel+AE+DG;

The above configuration plus two-step prediction (TP) introduced in 4.1.2, which is the full model, denoted as Basel+AE+DG+TP or simply DSTGCNN.
The experimental results of all configurations evaluated on the TaxiBJ test set are reported in Table 1. We predict two time steps ahead in all configurations. We can observe that each proposed component consistently reduces the prediction errors, and the full model achieves the best performance. The results demonstrate that the auxiliary information embedding, the graph prediction stream and the two-step prediction scheme are all beneficial and complementary to each other. Their combination accumulates the advantages and therefore achieves the best performance.
5.4 Experiments on METR-LA Dataset
In this subsection, we evaluate the prediction performance of DSTGCNN and the compared methods on the METR-LA dataset. We compare DSTGCNN with five different methods, including: 1) Auto-Regressive Integrated Moving Average (ARIMA), a well-known method for time-series forecasting that is widely used in traffic prediction; 2) Feed-Forward Neural Network (FNN) with two hidden layers and L2 regularization; 3) Recurrent Neural Network with fully connected LSTM hidden units (FCLSTM) [sutskever2014sequence]; 4) DCRNN [li2017diffusion], a recent method which utilizes diffusion convolution and achieves decent results on METR-LA; 5) STGCN [yu2017spatio], a spatiotemporal graph convolutional network that uses a fixed affinity matrix.
Table 2 shows the comparison results on the METR-LA dataset. For all prediction horizons and all metrics, our method outperforms both the traditional statistical approaches and the deep-learning-based approaches. This demonstrates the consistency of our method's performance for both short-term and long-term prediction.
In Figure 4, we also show a qualitative comparison of the predictions over one day on the METR-LA dataset. It shows that DSTGCNN captures the trends of the morning peak and the evening rush hour better. As can be seen, DSTGCNN predicts starts and ends of peak hours that are closer to the ground truth. In contrast, DCRNN does not catch up with the changes in the traffic data.
5.5 Experiments on TaxiBJ Dataset
We also compare the proposed method with the state-of-the-art methods on the TaxiBJ dataset. The compared methods include: 1) Seasonal ARIMA (SARIMA); 2) the vector autoregression model (VAR); 3) FCCF [10.1145/2996913.2996934]; 4) FCLSTM [sutskever2014sequence]; and 5) DCRNN [li2017diffusion]. FCCF utilizes both volume data and auxiliary information including time and weather. Note that we follow the experiments in FCCF and only predict volumes in the next step (30 min later), so the two-step prediction in our model is not applied. The results of FCCF, SARIMA and VAR were reported in [10.1145/2996913.2996934]. Since only RMSE results are provided for SARIMA, VAR and FCCF, we compare with these three methods in terms of the RMSE metric. For FCLSTM and DCRNN, we use the default experimental settings in the corresponding papers, and the results are compared in terms of RMSE, MAE and MAPE. The results are shown in Table 3.
Table 3. Performance comparison on the TaxiBJ dataset.

Method   | Out: MAE / RMSE / MAPE  | In: MAE / RMSE / MAPE
SARIMA   | -     / 21.2  / -       | -     / 18.9  / -
VAR      | -     / 15.8  / -       | -     / 15.8  / -
FCCF     | -     / 14.2  / -       | -     / 14.1  / -
FCLSTM   | 11.32 / 14.4  / 13.67%  | 11.92 / 15.3  / 17.30%
DCRNN    | 10.49 / 13.8  / 13.11%  | 10.71 / 14.5  / 14.46%
DSTGCNN  |  9.38 / 12.0  / 11.90%  |  9.30 / 12.62 / 13.27%
From Table 3, we can see that the proposed DSTGCNN achieves the best performance. The comparison results suggest that the proposed STC layer combined with the graph prediction stream is very effective for future traffic prediction. Although the two-step prediction strategy is not utilized in this one-step-ahead setting, our method still models the spatiotemporal dependency and the dynamic graph structure robustly.
5.6 Experiments on CDHW Dataset
In this subsection, we evaluate the prediction performance of DSTGCNN and the compared methods on the CDHW dataset. Because deep learning methods have shown better performance than traditional methods, we only compare our method with DCRNN and STGCN. Table 4 shows the comparison results.
Table 4. Performance comparison on the CDHW dataset.

T       | Metric | DCRNN | STGCN | DSTGCNN
30 min  | MAE    | 7.92  | 7.76  | 6.33
        | RMSE   | 12.10 | 11.77 | 10.18
        | MAPE   | 12.4% | 12.2% | 10.9%
60 min  | MAE    | 9.22  | 8.82  | 7.91
        | RMSE   | 14.32 | 13.61 | 12.32
        | MAPE   | 16.3% | 15.9% | 14.3%
We can observe that our method outperforms the recent deep-learning-based approaches DCRNN and STGCN. The reason is that our method takes the dynamic topology of the traffic network into consideration while the existing methods do not. As a result, our method can capture the propagation of traffic trends better. Furthermore, our method avoids the error propagation that occurs in DCRNN.
5.7 Analysis of Experimental Results
The reasons that our method achieves the new state of the art are as follows. First, compared with traditional methods, our deep model has a larger capacity to describe the complex data dependency in the traffic network. Second, our method takes the dynamic topology of the traffic network into consideration while the existing methods do not, so it can capture the propagation of traffic trends better. Finally, our network is carefully designed for traffic prediction: the two-step prediction scheme breaks a long-term prediction into two short-term predictions and makes the predictions easier.
6 Conclusion and Future Work
In this paper, we propose an effective and efficient framework, DSTGCNN, that can predict future traffic flows using surveillance videos. DSTGCNN is able to capture both the dynamics and the complexity of traffic. The experiments indicate that our method outperforms other state-of-the-art methods. In the future, we plan to apply the framework to other traffic prediction tasks such as pedestrian crowd prediction.