Precipitation nowcasting using a stochastic variational frame predictor with learned prior distribution
We propose the use of a stochastic variational frame prediction deep neural network with a learned prior distribution trained on two-dimensional rain radar reflectivity maps for precipitation nowcasting with lead times of up to 2 1/2 hours. We present a comparison to a standard convolutional LSTM network and assess the evolution of the structural similarity index for both methods. Case studies are presented that illustrate that the novel methodology can yield meaningful forecasts without excessive blur for the time horizons of interest.
Alex Bihlo Department of Mathematics and Statistics Memorial University of Newfoundland firstname.lastname@example.org
Keywords: precipitation nowcasting, deep learning, variational autoencoder, generative modeling
Accurate short-range (0 to 6 hours) precipitation forecasts are of great practical importance for various stakeholders, including airport authorities, event planners and insurance companies. While long-range precipitation forecasts are traditionally obtained from numerical weather prediction models, short-term forecasts are more challenging to obtain. The shorter lead times and higher resolutions required do not mesh well with traditional numerical weather predictions, which are costly to obtain and might not capture the initial precipitation distribution correctly, despite many recent improvements owing to increased computational power and improved forecasting strategies.
On very short time scales, other strategies are typically used, the most prominent being extrapolation-based methods. These methods generally compute an optical flow based on the last few precipitation fields by calculating an approximate flow velocity and then using semi-Lagrangian advection to move areas of precipitation along the calculated flow field, see e.g. [2, 20]. A main complication of these methods is that they require appropriate pre-processing of the raw radar image data, such as smoothing of the input data or image segmentation to label contiguous areas of precipitation. As an alternative, recent years have seen increased interest in machine learning based strategies. These methods have the advantage of being end-to-end trainable, with minimal, if any, pre-processing required.
Here we propose a novel deep learning approach to precipitation nowcasting. We implement a variation of the model proposed in [4], which was developed in the field of video frame prediction. The method employs an encoder–prediction–decoder framework resembling a conditional variational autoencoder [9], whose latent space prior distribution is learned from previous video frames. We then train the proposed model on a short series of radar frames with the aim of forecasting the precipitation over the next several hours.
2 Related work
The precipitation nowcasting problem has seen considerable interest in the meteorological community, with several decades of research spent on formulating increasingly refined extrapolation-based methods. A main reason for the prevalence of extrapolation-based methods over direct numerical weather prediction is that numerical models generally do not capture the initial precipitation well [11]. Various extrapolation-based approaches can be found in [2, 11, 20], as well as in the references therein.
Here we are exclusively concerned with machine learning methodologies, in particular those based on deep learning. This is a relatively recent application field of machine learning and a fundamentally different approach to the precipitation nowcasting problem compared to the hitherto dominant numerical or extrapolation-based approaches. Recent examples of the deep learning methodology using convolutional recurrent neural networks can be found in the seminal paper [14], as well as in [15].
It is important to stress that precipitation forecasting based on radar images is an ill-posed problem: the radar reflectivity maps do not contain all the physically relevant information needed to uniquely forecast the rainfall rates in future radar maps. In this sense, the precipitation forecasting problem can be regarded as a particular case of video frame prediction, which is likewise ill-posed. There are multiple possible solutions, and a machine learning system aims to find the most probable ones.
Several approaches to video frame prediction using deep learning have been proposed in recent years, in particular methods based on LSTM autoencoder-like networks [1, 4, 5, 16], generative adversarial networks [10, 13, 17] and probabilistic approaches [19].
The description of the model used here follows [4]. We use a prediction model within an encoder–decoder framework that maps the single given input radar image x_{t-1} from physical space to latent space, where it is passed through a stack of convolutional LSTM layers, and back to physical space, yielding a prediction x̂_t that aims to closely approximate the next true radar image x_t.
The prediction in latent space is conditioned upon a random vector z_t that is sampled from a multivariate Gaussian prior distribution p(z_t | x_{1:t-1}) learned from the previous series of input frames x_{1:t-1}. The main idea is that the learned prior distribution encodes important stochastic information about the next frame that is not included in the deterministic prediction model. For the precipitation nowcasting problem, we expect the prior model to learn the basic physical rules by which precipitation cells evolve and move in time and space.
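Sampling the latent vector from a learned Gaussian distribution is typically done with the reparameterization trick. A minimal sketch, assuming the prior network outputs a mean and a log-variance vector (the function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def sample_latent(mu, log_var, rng):
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
    # Keeping the noise separate from (mu, sigma) makes sampling differentiable
    # with respect to the distribution parameters during training.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(0)
z = sample_latent(np.zeros(70), np.zeros(70), rng)  # one draw from N(0, I)
```

At prediction time one draw per frame conditions the latent-space prediction model; drawing repeatedly yields an ensemble of forecasts.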
The prior distribution is trained together with a separate inference model, both in practice being recurrent neural networks. The inference model takes as input the target frame x_t and aims to compute the distribution q(z_t | x_{1:t}), where the dependence on the previous frames is due to the recurrent nature of the neural network structure chosen for the inference model.
The similarity between the inference distribution q(z_t | x_{1:t}) and the prior distribution p(z_t | x_{1:t-1}) is enforced by including a Kullback–Leibler divergence term in the cost function to be optimized. Note that the prior distribution depends on the previously seen radar images, but does not include the frame to be predicted next.
The model optimizes the variational lower bound

L(x_{1:T}) = Σ_t [ ℓ(x̂_t, x_t) + β D_KL( q(z_t | x_{1:t}) ‖ p(z_t | x_{1:t-1}) ) ],
which jointly trains the encoder–prediction–decoder network as well as the prior model network. The first term in the cost function corresponds to the reconstruction loss (in our case simply the ℓ2-loss between predicted and observed radar images), the second term is the Kullback–Leibler divergence. Note that this cost function differs from the standard variational autoencoder, for which β = 1. As was found in [6], the parameter β allows for tuning the balance between latent space capacity and reconstruction accuracy.
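A minimal sketch of this cost function for diagonal Gaussian distributions, assuming mean/log-variance parameterizations and a mean-squared reconstruction term (function names are illustrative):

```python
import numpy as np

def kl_diag_gauss(mu_q, logvar_q, mu_p, logvar_p):
    # Closed-form KL divergence D_KL(q || p) between two diagonal Gaussians.
    return 0.5 * np.sum(
        logvar_p - logvar_q
        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
        - 1.0
    )

def vlb_loss(x_pred, x_true, mu_q, logvar_q, mu_p, logvar_p, beta):
    # Reconstruction term plus beta-weighted KL term, as in the cost above.
    recon = np.mean((x_pred - x_true) ** 2)
    return recon + beta * kl_diag_gauss(mu_q, logvar_q, mu_p, logvar_p)
```

With beta = 1 this reduces to the standard variational autoencoder objective; smaller beta trades latent capacity against reconstruction accuracy.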
The general model architecture is represented in Figure 1.
The difference between our model and the one developed in [4] is that we use convolutional LSTM layers for the prediction model only, and regular LSTM layers for the inference and prior models, rather than convolutional LSTM layers for all model components. We also do not implement skip connections between the last encoder and first decoder layer, since our backgrounds typically correspond to areas of zero precipitation, which is readily learned by the model without skip connections.
The encoder network consists of four stacked, stride-2 convolutional layers with 16, 32, 64, and 128 filters, respectively. The filter size is for the first two layers and for the last two layers. No pooling or dense layers are used in the encoder and decoder networks. The latent space prediction model consists of two stacked convolutional LSTM layers with filters and a filter size of each. The inference and prior model add an extra stride-2 convolutional layer with 128 filters and filter size of before a single conventional LSTM with 64 neurons and a densely connected layer with linear activation and 70 neurons, which corresponds to the dimension of our latent space.
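The four stride-2 convolutions successively halve the spatial dimensions of the input before the latent-space prediction model operates on the resulting feature maps. A small sketch tracing the feature-map shapes (the 128 × 128 input size is an assumption for illustration only):

```python
import math

def encoder_shapes(height, width, filters=(16, 32, 64, 128)):
    # Each stride-2 'same'-padded convolution halves the spatial
    # dimensions (rounding up) while changing the channel count.
    shapes = []
    h, w = height, width
    for f in filters:
        h, w = math.ceil(h / 2), math.ceil(w / 2)
        shapes.append((h, w, f))
    return shapes

print(encoder_shapes(128, 128))
# [(64, 64, 16), (32, 32, 32), (16, 16, 64), (8, 8, 128)]
```

The decoder mirrors this with stride-2 transposed convolutions back to the original image size.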
The model is trained on sequences of 5 radar images each, sampled every 15 minutes, with the aim of forecasting the next 10 images, i.e. up to 2 1/2 hours into the future. Each radar frame is forecast separately, and the multi-step forecast is accomplished by feeding the generated frames back into the input stack of the prediction model.
The model is implemented in Python 3 using Keras [3] with the TensorFlow backend. Initial training was done using one NVIDIA T4 GPU on the interactive Google Colab platform; production runs were carried out using one NVIDIA P100 GPU on the Compute Canada cluster Graham.
Training was done using the Adam optimizer [8] with the default learning rate. The parameter β was set to the optimal value found in our hyper-parameter studies; this value is consistent with the one used in [4]. The training data consisted of approximately 10,000 samples of six frames each (5 input frames, 1 prediction frame). A total of 10% of the training data was held out for validation. The test data consisted of 1,000 samples (5 input frames, 1 prediction frame; multi-frame predictions were obtained by feeding the predicted image back into the stack of 5 input frames). Early stopping was employed in all experiments.
The proprietary radar data was provided by Austro Control, the Austrian air navigation service provider, and consists of two-dimensional reflectivity maps recorded by four weather radar stations (Feldkirchen, Patscherkofel, Rauchenwarth, Zirbitzkogel) over the Austrian region. The original resolution of the data is 1 km × 1 km, but the data has been down-sampled to 5 km × 5 km and to 15-minute intervals to allow for reasonably fast training times, and for mini-batches of images to still fit into the memory of a single GPU. The data consists of converted rain rates, employing the standard Z–R (radar reflectivity to rain rate) relation of the form Z = aR^b with the empirical constants a = 200 and b = 1.6 according to Marshall and Palmer [12], see also [7, 18]. The converted rain rates (in mm/h) fall into discrete classes, which our method aims to predict in an end-to-end fashion. For more information on the radar data, see [7].
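Inverting the Marshall–Palmer Z–R relation to obtain rain rates from measured reflectivity (usually reported in dBZ, i.e. 10 log10 Z) can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def dbz_to_rain_rate(dbz, a=200.0, b=1.6):
    # Z = a * R**b with the Marshall-Palmer constants a = 200, b = 1.6,
    # where dBZ = 10 * log10(Z). Inverting gives R = (Z / a)**(1 / b) in mm/h.
    z = 10.0 ** (np.asarray(dbz) / 10.0)
    return (z / a) ** (1.0 / b)
```

For example, a reflectivity of about 23 dBZ (Z = 200) corresponds to a rain rate of 1 mm/h under this relation.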
A total of 5 years of data, from January 2014 to December 2018, has been used in this study, with 4 years used for training and 1 year for testing. To remove cases with little to no precipitation, we require that the cumulative rain rate over the entire image domain is at least 10,000 mm/h for each valid training and test case.
As a baseline comparison we use a two-layer convolutional LSTM model, similar to the one found in [14] to give the best results for the precipitation nowcasting problem. We have also tested other two-layer and three-layer convolutional LSTM architectures, which generally yielded worse results than those presented, possibly due to over-fitting.
Note that the convolutional LSTM models could be trained to predict all future frames in one sweep based on the given input frames. We abstain from doing this and rather predict one frame at a time, feeding the predicted frame back into the stack of input frames. The reason is that a prediction in one sweep would not allow a coupled dynamics among subsequent image frames to develop over the course of the prediction horizon, in contrast to the case when each individually predicted frame is fed back into the stack of input frames. The latter approach is also more flexible in that the prediction horizon can be chosen to be as long as desired, an appealing practical feature for precipitation nowcasting with various lead times.
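The frame-by-frame rollout described above can be sketched as follows; `predict_one` stands in for any single-frame prediction model and is a hypothetical placeholder, not the paper's actual model:

```python
import numpy as np

def rollout(predict_one, frames, n_future):
    # frames: the last k observed radar images, shape (k, H, W).
    # Repeatedly predict one frame and slide it back into the input stack,
    # so each prediction conditions on previously generated frames.
    window = list(frames)
    preds = []
    for _ in range(n_future):
        nxt = predict_one(np.stack(window))   # predict a single next frame
        preds.append(nxt)
        window = window[1:] + [nxt]           # drop oldest, append newest
    return np.stack(preds)
```

Because the loop length is a free parameter, the forecast horizon can be extended arbitrarily without retraining.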
We compute the structural similarity index (SSIM) over the test data for each of the 10 frames of every prediction. The time series of the mean over the test data set is presented in Figure 2. For the sake of completeness, we also show the reconstruction quality of the conditional variational autoencoder method (frames 1 to 5). The prediction starts at frame 6. Figure 2 illustrates that the vanilla convolutional LSTM network gives better predictions for the first two frames, but its forecast quality drops significantly afterwards. The stochastic variational method in turn gives better longer-term forecasts, with the drop in SSIM being significantly smaller compared to the convolutional LSTM network. This is consistent with the observation that convolutional LSTM networks tend to blur later frames when used in frame-by-frame video prediction, which was one main motivation for the stochastic variational frame prediction method introduced in [4]. In the present case, this translates to sharper predictions of precipitation cells at longer lead times, an attractive feature for the real-world nowcasting problem.
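The SSIM between two images can be sketched in its global (single-window) form; evaluations in practice typically use the windowed variant, e.g. from scikit-image, so this is a simplified illustration of the metric only:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0, k1=0.01, k2=0.03):
    # Single-window SSIM: compares luminance (means), contrast (variances)
    # and structure (covariance) of the two images as a whole.
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score 1; blurred or displaced precipitation cells lower the covariance term and hence the score, which is why SSIM penalizes the excessive smoothing of later frames.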
In Figure 3 we present several individual test cases for the two models, including both advective and convective precipitation. Results for the stochastic variational frame predictor model are averaged over 10 individual realizations from the model. The stochastic variational frame predictor network typically captures the essential evolution of the precipitation fields, while the standard convolutional LSTM network usually either under- or overestimates the development of the precipitation cells. Additionally, the convolutional LSTM network adds considerably more blur to frames further in the future, a well-known phenomenon in the video frame prediction literature, see e.g. [4, 13].
We have applied a stochastic variational frame predictor network with a learned prior distribution to the precipitation nowcasting problem. We found the network easy to train in practice, in particular compared to standard convolutional LSTM networks, which require relatively long training cycles. The model attempts to encode stochastically meaningful information about the next frames by training a frame-dependent prior distribution. For the precipitation nowcasting problem we interpret this as attempting to learn the physics of the current precipitation cells with the aim of better forecasting the following frames. The model is capable of producing meaningful longer-term precipitation predictions, in particular when compared to the standard convolutional LSTM model, which excessively blurs the predicted precipitation fields.
Another advantage of the stochastic variational model is that it allows one to predict more than just a single possible future for the evolution of precipitation fields. This is important since the precipitation nowcasting problem is not well-posed, meaning there are many possible evolutions for a given set of input radar frames. Being able to incorporate a level of uncertainty of the future development of active weather regions is of great practical importance in numerical weather prediction, which in recent years has increasingly moved towards ensemble prediction strategies. The fully deterministic convolutional LSTM model is not capable of incorporating stochastic variations and thus can only predict one single deterministic future. Exploring the use of the stochastic frame predictor model with learned prior distribution within the setting of ensemble weather prediction is a potentially promising avenue for future research.
This research was supported, in part, thanks to the Canada Research Chairs and the NSERC Discovery grant programs and by support provided by ACENET (https://www.ace-net.ca/) and Compute Canada (www.computecanada.ca). The author is grateful to Oliver Stueker for technical help regarding the use of the cluster Graham, and to Markus Kerschbaum, Johannes Sachsperger and Martin Steinheimer at Austro Control for providing the radar data.
-  Babaeizadeh M., Finn C., Erhan D., Campbell R.H. and Levine S., Stochastic variational video prediction, arXiv preprint arXiv:1710.11252 (2017).
-  Bowler N.E., Pierce C.E. and Seed A., Development of a precipitation nowcasting algorithm based upon optical flow techniques, Journal of Hydrology 288 (2004), 74–91.
-  Chollet F. et al., Keras, https://keras.io, 2015.
-  Denton E. and Fergus R., Stochastic video generation with a learned prior, arXiv preprint arXiv:1802.07687 (2018).
-  Finn C., Goodfellow I. and Levine S., Unsupervised learning for physical interaction through video prediction, in Advances in neural information processing systems, 2016 pp. 64–72.
-  Higgins I., Matthey L., Pal A., Burgess C., Glorot X., Botvinick M., Mohamed S. and Lerchner A., β-VAE: Learning basic visual concepts with a constrained variational framework, in International Conference on Learning Representations, vol. 3, 2017.
-  Kaltenboeck R. and Steinheimer M., Radar-based severe storm climatology for Austrian complex orography related to vertical wind shear and atmospheric instability, Atmospheric Research 158 (2015), 216–230.
-  Kingma D.P. and Ba J., Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980 (2014).
-  Kingma D.P. and Welling M., Auto-encoding variational Bayes, arXiv preprint arXiv:1312.6114 (2013).
-  Lee A.X., Zhang R., Ebert F., Abbeel P., Finn C. and Levine S., Stochastic adversarial video prediction, arXiv preprint arXiv:1804.01523 (2018).
-  Lin C., Vasić S., Kilambi A., Turner B. and Zawadzki I., Precipitation forecast skill of numerical weather prediction models and radar nowcasts, Geophysical Research Letters 32 (2005).
-  Marshall J.S. and Palmer W.M.K., The distribution of raindrops with size, Journal of Meteorology 5 (1948), 165–166.
-  Mathieu M., Couprie C. and LeCun Y., Deep multi-scale video prediction beyond mean square error, arXiv preprint arXiv:1511.05440 (2015).
-  Shi X., Chen Z., Wang H., Yeung D.Y., Wong W.K. and Woo W.c., Convolutional LSTM network: A machine learning approach for precipitation nowcasting, in Advances in neural information processing systems, 2015 pp. 802–810.
-  Shi X., Gao Z., Lausen L., Wang H., Yeung D.Y., Wong W.k. and Woo W.c., Deep learning for precipitation nowcasting: A benchmark and a new model, in Advances in Neural Information Processing Systems, 2017 pp. 5617–5627.
-  Srivastava N., Mansimov E. and Salakhutdinov R., Unsupervised learning of video representations using LSTMs, in International Conference on Machine Learning, 2015 pp. 843–852.
-  Vondrick C. and Torralba A., Generating the future with adversarial transformers, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017 pp. 1020–1028.
-  Wilson J.W. and Brandes E.A., Radar measurement of rainfall—a summary, Bulletin of the American Meteorological Society 60 (1979), 1048–1060.
-  Xue T., Wu J., Bouman K. and Freeman B., Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks, in Advances in Neural Information Processing Systems, 2016 pp. 91–99.
-  Zahraei A., Hsu K.l., Sorooshian S., Gourley J., Lakshmanan V., Hong Y. and Bellerby T., Quantitative precipitation nowcasting: a Lagrangian pixel-based approach, Atmospheric Research 118 (2012), 418–434.