Parameter Calibration for Power Grid Stability Models Using Deep Learning Methods
This paper presents a novel parameter calibration approach for power system stability models using automatic data generation and advanced deep learning technology. A PMU-measurement-based “event playback” approach is used to identify potentially inaccurate parameters and to automatically generate extensive simulation data, which are used to train a convolutional neural network (CNN). The trained CNN model predicts the accurate parameter values, which are then validated against the original PMU measurements. The accuracy and effectiveness of the proposed deep learning approach have been validated through extensive simulation and field data.
Maintaining high-fidelity power system stability models is of critical importance to ensuring the secure and economic operation and planning of today’s power grid, considering its increasingly stochastic and dynamic behavior. Traditional power grid stability model validation and parameter calibration are based on staged testing that takes generators offline, which can be quite costly and time-consuming. Over the past few years, the wide deployment of phasor measurement units (PMUs) has made it possible to use online measurements directly for model validation. A particle swarm optimization (PSO)-based approach was proposed for stability model parameter calibration through simultaneous perturbation stochastic approximation. However, this method is time- and effort-consuming, as it requires extensive iterations between dynamic simulations and the PSO. An advanced ensemble Kalman filter (EnKF)-based method was proposed and integrated with a commercial software package for generator parameter calibration. However, it requires modifying the source code of the stability models in the commercial software to accommodate the EnKF algorithm; thus, its accessibility and availability are currently limited.
In this letter, we propose a novel stability model parameter calibration method using automatic data generation and deep learning technology. An “event playback” approach that uses the real and reactive power responses of the stability model under examination to identify inaccurate parameters is proposed. Extensive simulation data are automatically generated using the stability model, with the identified parameters randomly perturbed. These generated data are used to train a convolutional neural network (CNN) whose outputs are the predicted parameter values. The CNN preserves the translation invariance of the input data and is robust to measurement noise. Advanced deep learning techniques such as rectified linear unit (ReLU) activation and dropout regularization are also used to improve the neural network performance. Extensive studies using simulated and field data have validated the accuracy and effectiveness of the proposed deep learning approach.
II. Proposed Approach
The proposed deep learning approach calibrates stability model parameters in a systematic manner. The first step is to determine whether inaccurate parameters exist using the “event playback” concept shown in Fig. 1. The PMU voltage and phase angle measurements are used to run a dynamic simulation with the existing model parameters. The simulated dynamic response of the subsystem, including active and reactive power, is then compared with the corresponding PMU power measurements. A large difference between the simulated and measured data indicates a discrepancy in the model parameters, and thus a calibration procedure is required.
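The playback comparison above can be sketched as a simple mismatch check. This is a minimal illustration, not the authors' code: the function names, the normalized-RMS metric, and the 5% threshold are assumptions for demonstration.

```python
import numpy as np

def playback_mismatch(measured, simulated):
    """Normalized RMS difference between a measured and a simulated
    power trajectory (both 1-D arrays over the same time steps)."""
    measured = np.asarray(measured, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    rms = np.sqrt(np.mean((measured - simulated) ** 2))
    # Normalize by the measured signal's RMS so the metric is unitless.
    return rms / np.sqrt(np.mean(measured ** 2))

def needs_calibration(p_meas, p_sim, q_meas, q_sim, threshold=0.05):
    """Flag the model for calibration when either the active- or the
    reactive-power mismatch exceeds the (illustrative) threshold."""
    return (playback_mismatch(p_meas, p_sim) > threshold or
            playback_mismatch(q_meas, q_sim) > threshold)
```

A perfect match returns a zero mismatch, so `needs_calibration` stays False; any sustained deviation pushes the normalized RMS above the threshold.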
The second step is to identify a group of potentially inaccurate parameters. Because a typical stability model contains dozens of parameters, narrowing the number of targeted parameters significantly increases the calibration efficiency. A sensitivity analysis method is used to identify potentially inaccurate parameters:
$$S_i = \frac{1}{T}\sum_{t=1}^{T}\left|\frac{y(p_i+\Delta p_i,\,t)-y(p_i,\,t)}{\Delta p_i}\right|$$
where $p_i$ and $\Delta p_i$ are the original value and small perturbation of parameter $i$, $y(\cdot,t)$ is the time response, and $T$ is the number of total time steps. This sensitivity analysis wipes out parameters with zero or very small sensitivity.
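The finite-difference sensitivity above can be sketched as follows. This is a hypothetical helper, assuming a `simulate` callback that maps a parameter dictionary to the time response; the 5% relative perturbation size is an illustrative assumption.

```python
import numpy as np

def parameter_sensitivity(simulate, params, name, rel_perturb=0.05):
    """Finite-difference trajectory sensitivity of one parameter.

    simulate(params) -> np.ndarray of the time response y(t).
    Averages |delta_y / delta_p| over all T time steps, matching the
    finite-difference form of the sensitivity index above.
    """
    base = simulate(params)
    p0 = params[name]
    dp = rel_perturb * p0                      # small perturbation
    perturbed = dict(params, **{name: p0 + dp})
    resp = simulate(perturbed)
    return np.mean(np.abs(resp - base) / abs(dp))
```

Parameters whose sensitivity is zero or negligibly small are then dropped from the calibration set, e.g. by keeping only names whose score exceeds a small tolerance.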
The third step is to automatically generate extensive generator dynamic data with respect to those potentially inaccurate parameters. In this paper, we randomly sampled parameter values between 50% and 200% of their original values for the data generation. The generated data were used to train a multi-output CNN model to predict the parameter values, as shown in Fig. 2. The first two layers of this CNN model are convolutional layers, and each layer consists of a convolution function, a ReLU activation, and a maximum pooling. The output of the convolution and ReLU is
$$z_j = \max\!\left(0,\;\sum_{k=1}^{K} w_k\, x_{j+k-1} + b\right)$$
where $w$ and $x$ are the filter feature and the input, $K$ is the filter size, and $b$ is the bias term. The maximum pooling simply takes the largest value in the pooling region as the output of the convolutional layer. The objective of the first two convolutional layers is to capture common detailed patterns in the input data, such as a sudden drop or increase in the waveform. Parallel branches of convolutional and dense layers are connected to the two common layers to predict different parameters individually. The proposed structure enables feature sharing among the CNN parameters, which dramatically reduces the computational burden. Dropout regularization layers are used in the CNN model to prevent over-fitting. The CNN model uses the root-mean-square (RMS) error as the loss function for training the neural network.
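The branched multi-output architecture described above can be sketched in Keras. This is a minimal illustration, not the authors' implementation: the layer counts, filter sizes, and dropout rate are assumptions, and Keras' built-in mean-squared-error stands in for the RMS loss (minimizing one minimizes the other).

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_multi_output_cnn(n_steps, n_channels, param_names):
    """Two shared convolutional blocks followed by one parallel
    regression branch per calibrated parameter."""
    inp = layers.Input(shape=(n_steps, n_channels))
    # Shared feature extractor: convolution -> ReLU -> max pooling, twice.
    x = layers.Conv1D(16, 5, activation="relu")(inp)
    x = layers.MaxPooling1D(2)(x)
    x = layers.Conv1D(32, 5, activation="relu")(x)
    x = layers.MaxPooling1D(2)(x)
    # Parallel branches: one scalar regression head per parameter.
    outputs = []
    for name in param_names:
        b = layers.Conv1D(16, 3, activation="relu")(x)
        b = layers.Flatten()(b)
        b = layers.Dense(32, activation="relu")(b)
        b = layers.Dropout(0.3)(b)      # dropout against over-fitting
        outputs.append(layers.Dense(1, name=name)(b))
    model = Model(inp, outputs)
    model.compile(optimizer="adam", loss="mse")  # MSE surrogate for RMS
    return model
```

Sharing the first two convolutional blocks across all branches is what lets the common waveform features (sudden drops or rises) be learned once rather than per parameter.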
The final step is to feed the measured PMU data to the well-trained CNN model to predict the values of those potentially inaccurate parameters. The accuracy of the proposed deep learning approach can be validated through another round of “event playback” that compares the measured PMU data with the data simulated using the CNN-predicted parameters.
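The prediction-and-validation step can be sketched as below. The function and parameter names are hypothetical; it assumes a trained multi-output Keras-style model (one scalar output per parameter) and a `simulate` callback as in the earlier steps.

```python
import numpy as np

def calibrate_and_validate(model, pmu_window, simulate, names, base_params):
    """Predict parameter values from one PMU measurement window, then
    re-run event playback with the calibrated parameters."""
    preds = model.predict(pmu_window[np.newaxis, ...])  # batch of one
    calibrated = dict(base_params)
    for name, value in zip(names, np.ravel(preds)):
        calibrated[name] = float(value)
    # Validation: the response simulated with the calibrated parameters
    # should track the measured PMU trajectory closely.
    return calibrated, simulate(calibrated)
```

The returned simulated response is then compared against the PMU measurements with the same mismatch metric used to trigger calibration in the first place.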
III. Test Results
The developed approach was tested on a real-world power generating unit, which consists of a synchronous generator, an exciter, a power system stabilizer, and a governor. The corresponding standard models are GENROU, ESST1A, GGOV1, and PSS2A. When large changes occurred in the measured PMU data, we ran the “event playback” and noticed a mismatch between the measured and simulated data. The mismatch indicated a discrepancy in the model parameters. A sensitivity analysis was carried out, and the identified candidate parameters with higher sensitivity are the inertia constant of the generator, the gain and time constant of the exciter, and the gain of the governor.
Around 10,000 simulated generator dynamic responses were automatically generated through “event playback”, with the targeted parameters randomly varied between 50% and 200% of their original values. 90% of the data were used for training and validation of the CNN model, and the remaining 10% were used for testing. We also compared the CNN model with a conventional machine learning neural network, the multi-layer perceptron (MLP). The two models have similar sizes and numbers of layers and are therefore comparable.
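The data generation and split described above can be sketched as follows. This is an illustrative helper, not the authors' code: the function signature, the seed, and the `simulate` callback are assumptions.

```python
import numpy as np

def generate_training_set(simulate, base_params, names, n_samples=10000,
                          low=0.5, high=2.0, train_frac=0.9, seed=0):
    """Randomly scale each targeted parameter to 50%-200% of its
    original value, simulate the playback response, and split 90/10."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    for _ in range(n_samples):
        scales = rng.uniform(low, high, size=len(names))
        params = dict(base_params)
        for name, s in zip(names, scales):
            params[name] = base_params[name] * s
        X.append(simulate(params))            # dynamic response (input)
        y.append([params[n] for n in names])  # true values (targets)
    X, y = np.array(X), np.array(y)
    n_train = int(train_frac * n_samples)
    return (X[:n_train], y[:n_train]), (X[n_train:], y[n_train:])
```

Because each sample stores the perturbed parameter values alongside the simulated response, the CNN can be trained directly as a regression from waveform to parameters.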
Remark 1: A Windows server with 32 cores of 3.20 GHz Intel Xeon CPUs and 128 GB of memory was used to generate the simulation data in parallel, and it took around 30 minutes to generate the 10,000 samples. Training the CNN model took another 15 minutes using the TensorFlow library.
Fig. 3 shows the normalized losses (RMS error) of the training and validation process for both the CNN and MLP models. The CNN model has much smaller losses than the MLP model, indicating that the CNN is able to predict the parameter values more accurately. Note that the training loss is higher than the validation loss because dropout regularization is active during training and deactivated during validation. The absolute error histograms of the targeted parameters in the testing results are shown in Fig. 4, and the corresponding statistics are presented in Table I. The prediction errors of the CNN model are much smaller than those of the MLP model.
|Parameter|MLP Error (%)|CNN Error (%)|
The well-trained CNN and MLP models are used to calibrate the targeted parameters by feeding the actual PMU measurements to the neural network models. The results are listed in Table II. The calibrated parameters of both the CNN and MLP models are obviously different from the original values, except for one parameter (less than 2% difference). To test the effectiveness of the deep learning approaches, the real-world PMU measurements are compared with the data simulated using the calibrated parameters of both the CNN and MLP models, as shown in Fig. 5. The PMU measurements (in blue) differ substantially from the data simulated with the original parameters (in red). The calibration accuracy is improved by using either the CNN (in black) or the MLP (in brown) method. The CNN method outperforms the MLP method, as its generated curves fit the PMU measurements better.
IV. Conclusion
This letter proposes a novel parameter calibration approach for power system stability models. We have developed a systematic procedure to identify and calibrate potentially inaccurate model parameters using automatic data generation and advanced deep learning techniques. The performance of the proposed CNN model has been compared with a conventional MLP model using both simulated and real-world data. The results have validated the accuracy and effectiveness of the proposed CNN-based approach in calibrating stability model parameters.
-  NERC, “Standard MOD-026-1 - Verification of Models and Data for Generator Excitation Control System or Plant Volt/Var Control Functions,” Standard, 2012.
-  C.-C. Tsai et al., “Practical considerations to calibrate generator model parameters using phasor measurements,” IEEE Transactions on Smart Grid, vol. 8, no. 5, pp. 2228–2238, 2017.
-  R. Huang et al., “Calibrating parameters of power system stability models using advanced ensemble Kalman filter,” IEEE Transactions on Power Systems, vol. 33, no. 3, 2018.
-  Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
-  Siemens PTI, PSS/E 33.5 Model Library, 2013.
-  M. Abadi et al., “TensorFlow: A system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), 2016, pp. 265–283.