Conductor Galloping Prediction on Imbalanced Datasets: SVM with Smart Sampling

Kui Wang, Jian Sun, Chenye Wu, and Yang Yu
Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, 100084, P.R. China

Conductor galloping is the high-amplitude, low-frequency oscillation of overhead power lines due to wind. Such movement may cause severe damage to transmission lines and hence poses significant risks to power system operation. In this paper, we aim to design a prediction framework for conductor galloping. The difficulty comes from the imbalanced dataset, as galloping happens rarely. By examining the impacts of data balance and data volume on the prediction performance, we propose to employ proper sample adjustment methods to achieve better performance. Numerical study suggests that using only three features, together with over sampling, the SVM based prediction framework achieves an F-score of 98.9. (This work has been supported in part by the National Key R&D Program of China (2018YFC0809400), the Youth Program of the National Natural Science Foundation of China (No. 71804087), and the Turing AI Institute of Nanjing.)

Index Terms: Conductor galloping, feature extraction, machine learning, SVM

I Introduction

Most modern power system control frameworks are designed for (near) normal operating conditions, which makes the power system vulnerable to the risks posed by extreme weather. Hence, predicting the damage due to extreme weather is critical to maintaining a reliable power system. In this paper, we aim to design a prediction framework for conductor galloping, which often happens in windy and humid conditions and may result in serious damage to the power grid, such as tripping, bolt looseness, and even pole collapse accidents [9]. Such damage does happen. In June 2018, an extreme hail wind caused conductor galloping, which resulted in a severe pole collapse accident (as shown in Fig. 1) in Shunyi District, Beijing, P.R. China.

The conventional wisdom for conductor galloping prediction is to use a model based approach, which examines the physical process of galloping and analyzes the trigger conditions in that process. The seminal work is the transmission line vibration model, proposed by Den Hartog in 1932 [7]. Based on this model, Nigol and Clarke introduced the physical process of galloping and set the stage in 1974 [11]. Since then, there have been only minor modifications to the model based understanding of galloping. With advances in sensing technology, it is now possible to take a data-driven approach to galloping prediction.

Fig. 1: Pole collapse induced by conductor galloping [6].

I-A Challenges and Opportunities

Forecasting extremal events is a challenge for data science, primarily due to the extremely small sample size. For instance, it is hard to improve the prediction accuracy for the critical peak load, even though general load forecasting is already accurate [12]. In the most recent study, the F-score of galloping prediction is far from satisfactory due to limited data availability (e.g., 83% in [5]). Such limitations also challenge a wide range of emerging algorithmic technologies, such as machine translation [4] and recommendation systems [1]. Thus, improving extremal event forecasts based on limited data is widely beneficial.

The recent deployment of smart grid monitoring meters across North China has collected a sizable dataset for conductor galloping. In this research, we seek to develop a data-driven prediction model based on this dataset. Specifically, we adopt models to facilitate the investigation of three questions:

  1. Which features are important in the prediction model?

  2. How will the data imbalance affect the prediction accuracy under different data volumes?

  3. What is the role of dataset volume in galloping prediction?

The answers inspire us to propose the smart sampling approach, which improves the dataset quality, yielding a better prediction accuracy. Figure 2 plots the paradigm of our efforts towards designing the prediction framework.

Fig. 2: Our framework for galloping prediction analysis.

I-B Literature Review

Towards answering the aforementioned three questions, we identify two major related research directions. The first one focuses on feature extraction, which contains a rather rich literature. We refer interested readers to an excellent survey [2] for more details.

Another related research direction investigates data imbalance issues in machine learning. For example, Liu et al. conduct an empirical study to highlight how class imbalance affects the performance of cost-sensitive classifiers in [10]. Taking the support vector machine (SVM) as an example, much effort has been devoted to tackling the challenges of applying SVMs to imbalanced datasets: utilizing the information-loss-minimization principle [13], adjusting the class boundary based on the kernel-boundary alignment algorithm [14], etc.

The research on designing customized machine learning algorithms for conductor galloping prediction is very limited. To the best of our knowledge, we are the first to design a prediction framework with an emphasis on understanding how to best utilize the information in an imbalanced dataset.

I-C Our Contributions

In seeking to design a customized prediction framework for conductor galloping, our principal contributions can be summarized as follows:

  • Feature Extraction: We identify the determinants that trigger conductor galloping by observing the data distribution, and validate our observations with the model based approach. The prediction model with the identified features achieves an F-score of 98.9.

  • Assess the Value of Data: By proposing the prediction framework, we investigate how the imbalance in the dataset limits the prediction performance, which in turn reveals the true value of heterogeneity in a dataset.

  • Smart Sampling Approach: We design a smart sampling approach to improving the dataset quality for better prediction accuracy. We highlight via numerical studies the value of this approach when the volume of the dataset is limited.

The rest of our paper is organized as follows. Section II overviews our galloping dataset and revisits the theoretical models for galloping detection. In this paper, we choose SVM for the prediction framework, and we introduce the evaluation metrics and feature extraction in Section III. Through extensive numerical studies in Section IV, we exploit sample adjustment approaches for imbalanced datasets to achieve better prediction accuracy. Finally, concluding remarks and future directions are given in Section V.

II Data and Model: the Basics

The historical galloping data was collected from October 2017 to January 2018 in China. Among the 80,596 meter-collected samples, 25,414 are galloping samples. We plot the distributions of 8 features (wind speed, humidity, temperature, precipitation, ice-thickness, wind-line angle, vertical wind speed, and amplitude) in Fig. 3. This figure indicates that, besides the imbalanced sample size (only about 30% galloping samples), the feature distributions show even more severe imbalance. This highlights the urgent need for a customized prediction framework for galloping. Note that for long-distance overhead power lines, the wind-line angle is generally not well defined. Hence, in the subsequent analysis, we only use the other 7 features.

Fig. 3: The distributions for features in galloping and normal samples. The “KLdiv” refers to the KL divergence of feature distribution in the two sample groups.
Fig. 4: Distributions of the two sample groups in 2-feature hyper planes. (a) and (c) demonstrate the distribution of samples’ true labels in the wind speed-temperature plane and temperature-precipitation plane, respectively. (b) and (d) highlight the imposters in this prediction model.

Before diving directly into the machine learning analysis, we first revisit the theoretical model for conductor galloping [7]: the trigger condition is

    dC_L/dα + C_D < 0,

where C_L and C_D are the aerodynamic lift and drag coefficients of the conductor, and α is the angle of attack from the wind. Note that C_L and C_D are also functions of the wind speed and the wind-line angle. This theoretical result gives us a first cut at identifying the important features for the prediction framework.
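The Den Hartog trigger condition can be checked numerically; a minimal sketch, with illustrative coefficient values (not measured data):

```python
def den_hartog_unstable(dcl_dalpha, cd):
    """Den Hartog criterion: galloping can be triggered when the slope of
    the lift coefficient with respect to the angle of attack, plus the
    drag coefficient, is negative."""
    return dcl_dalpha + cd < 0

# Illustrative values: an iced, asymmetric profile with a steep negative
# lift slope versus a bare circular conductor.
print(den_hartog_unstable(-1.2, 0.8))  # True: galloping possible
print(den_hartog_unstable(0.0, 1.0))   # False: aerodynamically stable
```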

III Metrics and Feature Extraction

In this paper, we select a conventional SVM model with the Gaussian kernel [8] to establish the prediction framework.
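As an illustration of this setup, the following sketch builds an RBF-kernel (Gaussian) SVM with scikit-learn; the data here is a synthetic stand-in invented for the example, not the paper's dataset:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy stand-in: columns mimic (wind speed, temperature, precipitation).
X = rng.normal(size=(200, 3))
y = np.where(X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0, 1, -1)  # synthetic labels

# Gaussian (RBF) kernel SVM, with standardized features.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
model.fit(X, y)
print(model.score(X, y))  # training accuracy on the toy data
```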

III-A Performance Metrics

In the machine learning literature, the most widely adopted metrics for performance evaluation are recall and precision. Recall measures, among the true galloping samples, how many our predictor identifies correctly. Denote the true label and predicted label for sample i by y_i and ŷ_i, respectively, and assume galloping samples are labelled 1 while the other samples are labelled -1. Then,

    Recall = Σ_i 1{y_i = 1, ŷ_i = 1} / Σ_i 1{y_i = 1},

where 1{·} is the indicator function.

On the other hand, precision measures, among the samples predicted to gallop, how many actually have the true label of galloping. More precisely,

    Precision = Σ_i 1{y_i = 1, ŷ_i = 1} / Σ_i 1{ŷ_i = 1}.
In this paper, we adopt a single metric, the F-score, to measure the combined performance over recall and precision [15], which is defined as follows:

    F = 2 · Precision · Recall / (Precision + Recall).
The test set is randomly selected from the whole dataset (at a 25% proportion), disjoint from the training set.
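The metrics above translate directly into code; a minimal sketch, assuming galloping samples are labelled 1 and normal samples -1:

```python
def recall_precision_fscore(y_true, y_pred):
    # True positives: predicted galloping and actually galloping.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    recall = tp / sum(1 for t in y_true if t == 1)
    precision = tp / sum(1 for p in y_pred if p == 1)
    f_score = 2 * precision * recall / (precision + recall)
    return recall, precision, f_score

# Toy example: 3 true galloping samples, 2 predicted correctly,
# plus one false alarm.
r, p, f = recall_precision_fscore([1, 1, 1, -1, -1], [1, 1, -1, 1, -1])
print(r, p, f)  # each equals 2/3 here
```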

III-B Feature Extraction

To extract the most powerful set of features for conductor-galloping prediction, we test the F-score performance of all 127 possible combinations of the 7 features. Surprisingly, we can use only three features and achieve remarkably good performance, with an F-score of 98.4. These three features are wind speed, temperature, and precipitation. Figure 4 plots the projections of the samples onto two 2-feature planes. It is evident that, with respect to wind speed and precipitation, temperature is a good classifier. Figure 4 also shows the classification results with minor imposters. We illustrate the performance of all 127 combinations in Fig. 5. While more features improve the F-score, the improvement over the selected three features is only marginal.
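The exhaustive search over feature groups is straightforward to enumerate; a sketch of the subset generation (scoring each subset with the SVM is omitted):

```python
from itertools import combinations

# The 7 candidate features after dropping the wind-line angle.
FEATURES = ["wind speed", "humidity", "temperature", "precipitation",
            "ice-thickness", "vertical wind speed", "amplitude"]

def all_feature_subsets(features):
    """Yield every non-empty subset: 2^7 - 1 = 127 groups for 7 features."""
    for k in range(1, len(features) + 1):
        yield from combinations(features, k)

subsets = list(all_feature_subsets(FEATURES))
print(len(subsets))  # 127
```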

Fig. 5: The prediction performance of different feature groups.
Fig. 6: The substitute relationship among features.

To further exploit the substitute structure in the 7 features, we start our analysis from the selected three-feature set. We first seek to understand the possible substitutes for precipitation. With only two features, wind speed and temperature, the SVM model can achieve an F-score of 73.2. Including any one of precipitation, ice-thickness, and humidity will increase the F-score dramatically. In particular, judging from the final performance, precipitation and ice-thickness are perfect substitute features for conductor galloping prediction. This also aligns with our intuition. Since most conductor galloping events happen at temperatures around 0°C, precipitation and ice-thickness on the overhead power line are closely correlated in this condition. In this regard, humidity is also a good substitute for precipitation and ice-thickness. However, including all three features won't further improve the F-score dramatically. Figure 6 (a) visualizes this substitute relationship. We can conduct the same analysis to understand the substitute relationship between wind speed and vertical wind speed. We visualize the result in Fig. 6 (b). One interesting observation is that vertical wind speed is believed to be a more important determinant in triggering galloping, as suggested by Den Hartog's model [7]. However, in practice, for a long-distance overhead power line, the vertical wind speed is also not well defined, which helps explain why vertical wind speed plays a weaker role in the SVM prediction model compared with wind speed.

IV Sampling for Imbalanced Datasets

A rich literature has suggested that the performance of a forecast model trained by limited data is contingent on whether the samples are balanced over features as well as whether they can represent the population’s distribution. This inspires us to conduct sample adjustment to balance the dataset over features for better performance.

Fig. 7: The prediction performance varies with the number of normal samples in the training set.
Fig. 8: The impact of data imbalance on the prediction performance.
Fig. 9: Performance comparison with different data balancing methods. (a) F-score; (b) Recall; (c) Precision.

IV-A Role of Data Balance

We first investigate the role of data balance in galloping prediction: we examine the prediction performance by constructing a dataset including 2,000 galloping samples and an increasing number of normal samples. Figure 7 plots the evolving performance. In this case, more normal samples in the training data increase the precision while decreasing the recall. The best trade-off, as illustrated by the F-score, happens at the data balance point (the same number of galloping and normal samples).

It is interesting to note that too many normal samples in the training set can even decrease the performance. We conduct more numerical studies to highlight this observation: for dataset sizes ranging from 2,000 to 20,000, Fig. 8 investigates the value of data balance for better performance. The peak performance is achieved almost always at the data balance point. Based on Fig. 8, we make a few more observations on the value of dataset volume: given the same imbalance level, larger volume implies better performance. On the other hand, a larger dataset is also more robust to data imbalance.

IV-B Sampling for Data Balance

We focus on the selected set of three features: wind speed, temperature and precipitation. To achieve a better performance, we employ two sample adjustment methods to balance the dataset: under sampling and over sampling.

For under sampling, we drop some normal samples from the dataset to achieve balance while keeping the distribution of each feature in the normal sample group unchanged. To achieve this goal, we repeatedly select a normal sample from the current dataset uniformly at random and drop it, until the dataset is balanced.
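A minimal sketch of this under-sampling step (on toy data, not the real samples):

```python
import random

def undersample(normal, galloping, seed=0):
    """Drop uniformly random normal samples until both classes match in
    size; uniform drops preserve the feature distribution in expectation."""
    rng = random.Random(seed)
    kept = list(normal)
    while len(kept) > len(galloping):
        kept.pop(rng.randrange(len(kept)))
    return kept

normal = list(range(10))    # toy normal samples
galloping = list(range(4))  # toy galloping samples
balanced = undersample(normal, galloping)
print(len(balanced))  # 4: same size as the galloping group
```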

For over sampling, we use the Synthetic Minority Over-sampling Technique (SMOTE) [3] to boost the galloping samples. SMOTE is frequently used for balancing imbalanced datasets in machine learning. To over sample the galloping samples (the minority) in the dataset by SMOTE, we first take a real galloping sample and find its nearest galloping neighbors in the feature space. Then we randomly select one of these neighbors and create a sample by a random linear combination of the selected galloping sample and its neighbor. Finally, we add this new point, labelled as galloping, to the current dataset. By repeating this process, we can create many new galloping samples to balance the dataset.
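The core SMOTE step described above can be sketched in NumPy as follows (a bare-bones illustration on toy data; in practice a library implementation such as imbalanced-learn's SMOTE would be used):

```python
import numpy as np

def smote_one(minority, k=3, rng=None):
    """Create one synthetic minority sample: pick a real minority point,
    choose one of its k nearest minority neighbours, and interpolate
    randomly between the two."""
    rng = rng if rng is not None else np.random.default_rng(0)
    i = rng.integers(len(minority))
    x = minority[i]
    dist = np.linalg.norm(minority - x, axis=1)
    dist[i] = np.inf                       # exclude the point itself
    neighbours = np.argsort(dist)[:k]      # k nearest minority neighbours
    j = rng.choice(neighbours)
    lam = rng.random()                     # random interpolation weight
    return x + lam * (minority[j] - x)

# Toy minority (galloping) samples in a 2-D feature space.
minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
new = smote_one(minority)
print(new)  # a point on a segment between two real minority samples
```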

Figure 9 compares the performance of the two methods. Both methods increase recall and decrease precision. For the galloping prediction task, over sampling outperforms both under sampling and the unbalanced baseline, achieving an F-score of 98.9. This is because, in the original dataset, though imbalanced, the galloping samples still accurately represent the true distribution of the galloping population, which allows us to conduct valid over sampling.

V Conclusion

In this paper, we design an SVM-based framework for conductor galloping prediction. To improve the prediction performance, we examine the impacts of data balance and data volume on the F-score, and we submit that a balanced dataset is vital to achieving remarkably good performance with limited resources. For the purpose of conductor galloping prediction, numerical studies suggest that over sampling is a good approach to maintaining data balance.

This work can be extended in many ways. For example, it is important to examine the generalization ability of our proposed model, as it is generally costly to collect data for extremal events. We also intend to design an adaptive online learning framework for conductor galloping prediction.


  • [1] S. Brin and L. Page (1998) The anatomy of a large-scale hypertextual web search engine. Computer Networks and ISDN Systems 30 (1-7), pp. 107–117.
  • [2] G. Chandrashekar and F. Sahin (2014) A survey on feature selection methods. Computers & Electrical Engineering 40 (1), pp. 16–28.
  • [3] N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer (2002) SMOTE: synthetic minority over-sampling technique. Journal of Artificial Intelligence Research 16, pp. 321–357.
  • [4] K. Chen, T. Zhao, M. Yang, L. Liu, A. Tamura, R. Wang, M. Utiyama, and E. Sumita (2018) A neural approach to source dependence based context model for statistical machine translation. IEEE/ACM Transactions on Audio, Speech, and Language Processing 26 (2), pp. 266–280.
  • [5] Y. Cheng, J. Han, J. Zhang, and W. Hao (2018) Reconstructing the problem of galloping monitoring of traditional complex analytical mechanism into a prediction method for machine learning algorithm modeling. In 2018 IEEE 3rd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), pp. 2552–2557.
  • [6] China Electric Power News (2018) Website.
  • [7] J. Den Hartog (1932) Transmission line vibration due to sleet. Transactions of the American Institute of Electrical Engineers 51 (4), pp. 1074–1076.
  • [8] S. S. Keerthi and C.-J. Lin (2003) Asymptotic behaviors of support vector machines with Gaussian kernel. Neural Computation 15 (7), pp. 1667–1689.
  • [9] K. Zhu, B. Liu, H. Niu, and J. Li (2010) Statistical analysis and research on galloping characteristics and damage for iced conductors of transmission lines in China. In 2010 International Conference on Power System Technology, pp. 1–5.
  • [10] X. Liu and Z. Zhou (2006) The influence of class imbalance on cost-sensitive learning: an empirical study. In Sixth International Conference on Data Mining (ICDM'06), pp. 970–974.
  • [11] O. Nigol and G. Clarke (1974) Conductor galloping and control based on torsional mechanism. IEEE Transactions on Power Apparatus and Systems, pp. 1729–1729.
  • [12] A. Sinha and J. Mondal (1999) Dynamic state estimator using ANN based bus load prediction. IEEE Transactions on Power Systems 14 (4), pp. 1219–1225.
  • [13] Y. Tang, Y. Zhang, N. V. Chawla, and S. Krasser (2009) SVMs modeling for highly imbalanced classification. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 39 (1), pp. 281–288.
  • [14] G. Wu and E. Y. Chang (2005) KBA: kernel boundary alignment considering imbalanced data distribution. IEEE Transactions on Knowledge and Data Engineering 17 (6), pp. 786–795.
  • [15] Y. Yang and X. Liu (1999) A re-examination of text categorization methods. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '99), pp. 42–49.