Chance-Constrained Outage Scheduling using a Machine Learning Proxy
Outage scheduling aims at defining, over a horizon of several months to years, when different components needing maintenance should be taken out of operation. Its objective is to minimize operation-cost expectation while satisfying reliability-related constraints. We propose a distributed scenario-based chance-constrained optimization formulation for this problem. To tackle tractability issues arising in large networks, we use machine learning to build a proxy for predicting outcomes of power system operation processes in this context. On the IEEE-RTS79 and IEEE-RTS96 networks, our solution obtains cheaper and more reliable plans than other candidates.
Outage scheduling is performed by transmission system operators (TSOs), as an integral part of asset management, in order to carry out component maintenance and replacement activities . However, scheduling of the required outages for maintenance jobs is a complex task since it must take into account constrained resources (e.g. working crews, hours, and budget), increased vulnerability of the grid to contingencies during outages, and the impact of the scheduled outages on operations. Moreover, outage schedules, which are planned on a mid-term scale of several months to years, must also be robust with respect to uncertainties.
In this work, we present a general framework for assessing the impact of a given outage schedule on the expected costs and system reliability incurred while operating the system during the schedule’s period. In addition, we formulate and solve a stochastic optimization program for optimally scheduling a list of required outages. To do so, we take into account the smaller-horizon decision processes taking place during this time interval, namely day-ahead operational planning and real-time operation.
The complex dependence between the multiple time-horizons and the high uncertainty in the context of mid-term planning render the corresponding assessment problem challenging. As demonstrated in , solving an extensive number of unit commitment (UC) problems to mimic short-term decision-making does not scale well to realistic grids with thousands of buses or more. This is especially burdensome when simulating trajectories spanning months to years. To deal with this complexity, we propose to use machine learning to design a proxy that approximates short-term decision making, relieving the dependence of mid-term outcome assessment on accurate short-term simulations and thus allowing a tractable assessment method. Specifically, we replace exact UC computations with pre-computed UC solutions to problems with similar input conditions. See Section IV for further details on the method.
When planning future outages to enable maintenance, the aim is to satisfy a certain reliability criterion at all future times. Nowadays, the common practice among TSOs is to consider the deterministic N-1 reliability criterion, while other probabilistic criteria are also being investigated [3, 4]. To secure the system months in advance, the asset management operator should ideally assess whether each possible future scenario is secure, taking into account the coordination with day(s)-ahead and (intra)hourly operation. Since considering all possible realizations of future events is impractical, they must be approximated using sampled paths of future scenarios. In this work, we thus also devise a sampling scheme that richly represents possible occurrences while remaining tractable. We believe such methods are crucial for high-quality mid-term analysis.
I-A Related Work
Current practice in transmission system asset management offers three main approaches: time-based, condition-based, and reliability-centered preventive maintenance . As for the academic literature, two popular trade-offs are i) increasing equipment reliability via maintenance while minimizing maintenance costs; and ii) minimizing the effect of outages on socio-economic welfare while satisfying operating constraints.
In , the first of the above trade-offs was considered in a two-stage formulation. The first stage schedules mid-term maintenance, which imposes conditions on the second-stage problem: short-term N-1 secured scheduling. By choosing a maintenance action for each time interval, the Weibull asset failure probability was minimized analytically. In , an analytic objective function was also designed. There, maintenance reduced the cumulative risk of events such as overloads and voltage collapses, assuming known year-ahead generation and load profiles. The accumulated gain was negative during the actual maintenance, but positive afterwards due to the resulting failure-rate reduction.
As mentioned earlier, coordination with UC and economic dispatch may render the outage schedule assessment intractable, especially under security criteria. To overcome this, a coordination strategy between the different tasks was proposed in . There, mid-term planning over a deterministic 168-hour-long scenario minimized UC scheduling costs under a changing network topology. In more recent work , a greedy outage scheduling algorithm used Monte-Carlo simulations to assess the impact of outages on system operation. By mimicking expert heuristics for mid-term outage scheduling, it enables long-term assessment of system development and maintenance policies.
I-B Our Contribution
Our contribution is three-fold. First, we provide a new probabilistic mathematical framework that accounts for the three entities involved in the multiple-horizon decision-making process, namely the mid-term, short-term and real-time decision makers. Our model captures their hierarchy and formulates their coordination using an information-sharing scheme that limits each via partial observability.
Second, we introduce the novel concept of integrating a component that instantly predicts approximated short-term decision outcomes; we refer to it as a proxy. We implement it with a well-known machine learning algorithm, nearest neighbor. We thus enable a critical reduction in computation time, which renders large-scale multiple time-horizon simulation-based assessment tractable.
Third, we devise a scenario-based optimization methodology. It builds on our scenario generation approach, Bayesian hierarchical window sampling, which interpolates between sequential trajectory simulation and snapshot sampling while accounting for coordination between the three decision layers; see Fig. 1. Using it, we solve our stochastic chance-constrained outage scheduling formulation for IEEE-RTS79 and IEEE-RTS96 with distributed computing and show promising results.
II Mathematical Problem Formulation
In our problem setting, the TSO lists necessary outages, each one defined by a duration, a budget of crew requirements, and a specific component to be taken out of operation for maintenance. A specific component can be required to undergo more than a single outage. The outage scheduling horizon (e.g. several months, or a couple of years) is split into hourly time-steps. The decision variable is a vector whose entries denote the starting times of the respective outages, each chosen from a given set of candidate outage moments (e.g. monthly steps); together these define the set of candidate outage plans. We formulate the mid-term stochastic optimization program in (1), where the goal is to minimize expected future operating costs by adopting an optimal planned outage schedule.
This formulation’s components are explained as follows.
The objective in (1a) is the aggregated expected cost of operational decisions summed over the evaluation horizon. The expectation is w.r.t. the distribution of the uncertain future conditions of the grid, encapsulated in a stochastic exogenous scenario. A scenario is a series of states, which are introduced in more detail in Section III-A. When making decisions in the mid-term time-horizon, one must take into account the smaller-horizon decisions that take place during it. In this work, the smaller-horizon decisions considered are short-term (day-ahead) operational planning and real-time operation. Each set of candidate smaller-horizon decisions is a function of the decisions taken higher in the hierarchy.
In our work, a real-time decision defines a vector of redispatch values for each redispatchable generator, wind curtailment values for each wind generator, and load shedding values for each bus, and is determined by minimizing the cost of deviation from the day-ahead market schedule. The latter is determined by minimizing a day-ahead objective. Lastly, we choose the cost function to only account for the real-time operating costs. Specifically, it is identical to the real-time objective apart from the load shedding cost, since load shedding is already addressed via (1c). The formulation is nevertheless general, and may in principle also incorporate real-time load shedding, as well as market surplus and day-ahead reserve purchase costs, if deemed important.
II-B Reliability and Load Shedding Chance-Constraints
To maintain the generality and flexibility of our model while respecting its probabilistic framework, we define a reliability metric that is independent of smaller time-horizon specificities. It allows for equitable comparison between different maintenance strategies and operation policies. Inspired by the common N-1 criterion used in the industry, we adapt it to our probabilistic setup. Namely, we consider the system’s ability to withstand any contingency of a single component. We thus define reliability as the portion of contingencies under which the system retains safe operation, which, practically, we measure via AC power flow convergence.
For this, denote by the N-1 contingency list and by the real-time reliability. The latter is calculated for a given state and depends on the current topology, as dictated by the outage schedule:
where the indicator equals 1 if a feasible ACPF solution exists during contingency , and 0 otherwise. Also, let
i.e., the average success rate for scenario .
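To make this concrete, the metric can be sketched in a few lines; `acpf_converges` below is an assumed user-supplied oracle (e.g., wrapping an AC power flow solver) and the function names are illustrative, not part of the paper's implementation.

```python
# Sketch of the probabilistic N-1 reliability metric of Section II-B.
# `acpf_converges(state, contingency)` is an assumed external oracle.

def real_time_reliability(state, contingency_list, acpf_converges):
    """Fraction of single-component contingencies the system withstands."""
    ok = sum(1 for c in contingency_list if acpf_converges(state, c))
    return ok / len(contingency_list)

def scenario_reliability(states, contingency_list, acpf_converges):
    """Average real-time reliability over all states of a scenario."""
    return sum(real_time_reliability(s, contingency_list, acpf_converges)
               for s in states) / len(states)
```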
In similar fashion, let be the total load shed in state , as determined by the real-time decision , and , i.e., the average amount of load shed during scenario . Based on these definitions, the chance-constraints in (1b)-(1c) ensure that the average reliability remains above a minimal value and that the average load shedding remains below a maximal value , with respective probabilities. Based on reasonable achievable values for our specific test-case modifications listed in Section VI, throughout this work we set
The reason for explicitly incorporating the two chance-constraints together is to ensure both high reliability and low load shedding at the same time, as the two naturally trade off against each other. We return to this trade-off in Section VI.
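A minimal sketch of how such chance-constraints can be checked empirically over sampled scenarios; the function name and the way constraint levels are passed in are illustrative choices of this sketch, not the paper's exact implementation.

```python
# Empirical check of chance-constraints in the spirit of (1b)-(1c):
# each constraint must hold on a large-enough fraction of scenarios.

def chance_constraints_satisfied(avg_reliability, avg_load_shed,
                                 r_min, shed_max, eps_r, eps_shed):
    """avg_reliability / avg_load_shed: one value per sampled scenario."""
    n = len(avg_reliability)
    p_reliable = sum(r >= r_min for r in avg_reliability) / n
    p_low_shed = sum(s <= shed_max for s in avg_load_shed) / n
    return p_reliable >= 1 - eps_r and p_low_shed >= 1 - eps_shed
```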
II-C Feasibility Constraints
Maintenance feasibility constraints in (1d) define which maintenance schedules are feasible; e.g., no more than two assets may be maintained per month.
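As an illustration of (1d), the example rule above can be checked with a simple predicate; the two-assets-per-month limit is just the example from the text.

```python
# Illustrative feasibility check for (1d), assuming the example rule:
# no more than two assets maintained in any single month.
from collections import Counter

def is_feasible(schedule, max_per_month=2):
    """schedule: list of outage start months, one entry per outage."""
    return all(v <= max_per_month for v in Counter(schedule).values())
```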
II-D Coordination with Smaller-Horizon Subproblems
Lastly, the constraints in (1e)-(1f) ensure coordination between mid-term and smaller-horizon decisions. The informational states and appearing as arguments of depict the partial information revealed to the respective decision makers in these time-horizons; further details on the notion of informational states are given in Section III-B.
III Probabilistic Model Components
In this section, we elaborate on our probabilistic decision making model. We define a state-space representation encapsulating all exogenous uncertain information and the decision makers’ limited access to this information. Our model is generic and can be adapted for additional uncertain factors.
State captures all exogenous uncertain information at time , required to make informed decisions in all considered time-horizons. Let respectively be the number of transmission lines, buses, dispatchable generators, and wind generators in the network. The state is defined as the following tuple:
is the seasonal weather factor, determining the intensity of demand and wind generation. This variable changes monthly, with values drawn around a mean profile corresponding to typical seasonal trends.
is the day-ahead wind generation forecast, where is the day-ahead planning horizon (24 in our simulations). Notice that variables with subscript ’da’ remain fixed over each day-ahead planning period and are only updated when a new period begins.
is the day-ahead load forecast.
is the realized wind generation at time-step . It is assumed fixed during the intra-day interval (1 hour).
is the realized load at time-step .
is the network transmission line topology at time-step , as determined by exogenous events. Entry indicates line is offline at time due to a random forced outage.
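The state tuple of this section can be sketched as a simple record; the field names and array shapes below are illustrative assumptions mirroring the listed components.

```python
# Sketch of the exogenous state tuple of Section III-A.
from dataclasses import dataclass
import numpy as np

@dataclass
class State:
    """Exogenous uncertain information at a single time-step."""
    seasonal_factor: float        # monthly weather intensity factor
    wind_forecast_da: np.ndarray  # day-ahead wind forecast (horizon x wind gens)
    load_forecast_da: np.ndarray  # day-ahead load forecast (horizon x buses)
    wind_realized: np.ndarray     # realized wind generation at this hour
    load_realized: np.ndarray     # realized load at this hour
    line_status: np.ndarray       # per-line 0/1 vector of forced outages
```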
III-B Informational States
Decision makers with different time-horizons are exposed to different degrees of information; i.e., the higher the decision’s temporal resolution, the more state variables are realized at the time of the decision. We formulate these granularities via informational states, defined as sub-vectors of the state containing contiguous ranges of its entries; these yield respectively the short-term and real-time informational states. When making its decision, the short-term planner is exposed to the informational state carrying the realizations of the day-ahead generation and load forecasts. Note that it is also exposed to the higher-level mid-term decision; however, we do not model the latter as part of the informational state since it is not exogenous. As for the real-time operator, it is exposed to the realized values of all state entries, and is similarly informed of the higher-level mid-term and short-term decisions.
For completeness, we also define the mid-term informational state, even though it does not appear directly in (1); it does appear later for scenario generation purposes. Notice that in our work the mid-term plan is an open-loop strategy, so we do not use this informational state to revise mid-term decisions.
III-C Smaller-horizon Formulations
Our formulation contains three hierarchical levels of decision making, namely mid-term outage scheduling, short-term day(s)-ahead planning, and (intra)hourly real-time control. We often refer to the short-term and real-time problems as the inner subproblems. We now present the candidate decisions of the latter.
III-C1 Short-term Formulation
The short-term problem, which also appears in (1e), is set in this work to be UC. As explained in Section II, we choose the cost in (1a) to be solely based on real-time realizations and decisions. Thus, the UC cost here is not to be minimized by the mid-term planner; rather, the UC solution is plugged into the real-time problem to set commitment constraints and the redispatch cost reference. Notice the UC formulation depends on the day-ahead forecasts of wind power and load. These are part of the informational state to which the decision maker is exposed when making the day-ahead planning decision. The feasible action-space in (3) depends on the topology set by the mid-term decision, and it may also embody a reliability criterion of choice, e.g. N-0 or N-1.
III-C2 Real-time Formulation
The real-time problem, which also appears in (1f), takes the UC solution of (3) as a baseline. It is solved sequentially for each hour, where the solution at each time-step is fed to the next one to incorporate temporal constraints. Similarly to (3), a reliability criterion of choice may be ensured via the definition of the feasible set.
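As a toy analogue of the hourly real-time step, the sketch below performs a greedy merit-order upward redispatch toward the realized load on a single bus. It ignores network constraints, downward redispatch, curtailment, and shedding; it is only meant to illustrate how the day-ahead schedule serves as the baseline from which deviations are priced.

```python
def redispatch(g0, load, c_up, up_max):
    """Move generation from the day-ahead baseline g0 toward the realized
    load, filling the shortfall with the cheapest upward redispatch first."""
    need = load - sum(g0)   # shortfall w.r.t. the day-ahead schedule
    g = list(g0)
    for i in sorted(range(len(g0)), key=lambda i: c_up[i]):
        add = min(max(need, 0.0), up_max[i])  # respect the ramp-up limit
        g[i] += add
        need -= add
    return g
```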
III-D Scenario Generation
To solve (1) in the face of exogenous uncertainties and the intricate interactions between them, we rely in this work on scenario-based simulation . Existing literature on scenario generation splits into two main categories. The first is full-trajectory simulation , where (intra)hourly developments are simulated as a single long sequence. In our mid-term problem, which spans a whole year, such an approach would result in high variance and possibly necessitate an intractable number of samples to produce a decent evaluation of scenario costs. The second category is based on snapshot sampling of static future moments . The main issue with this methodology is the loss of temporal information.
In light of this, we introduce a new scenario generation approach, Bayesian hierarchical window scenario sampling, which is a hybrid of the two aforementioned methodologies, aimed at mitigating the disadvantages of each of them. Visualized in Fig. 2, our sampling process begins with drawing monthly parameters for wind and load intensity, i.e., drawing a sequence from a monthly transition probability. Then, we draw replicas of windows of consecutive days, resulting in sequences drawn from a daily transition probability. Lastly, per each such day, we draw replicas of windows of consecutive hours, forming sequences drawn from an hourly transition probability. Having realizations of day-ahead forecasts and their corresponding hourly realizations, we can respectively solve the daily and hourly inner subproblems. Based on the incurred costs, we are able to evaluate the scenarios’ accumulated costs per each month in a parallel fashion. The choice of the window lengths controls the level of interpolation between the completely sequential scenario sampling of all time-steps and the alternative completely static approach of solely sampling snapshots of states, with no temporal relation between them. Essentially, these lengths arbitrate between the bias and variance of the sampling process: full trajectory sampling has low bias but high variance, while static snapshot sampling lowers the variance, though it introduces bias due to its simplicity and the choice of snapshot times.
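Schematically, and assuming user-supplied conditional samplers for the three layers (the function names below are hypothetical), the sampling process can be written as three nested loops:

```python
def sample_scenario(n_months, day_replicas, n_days, hour_replicas, n_hours,
                    draw_monthly, draw_days, draw_hours):
    """Bayesian hierarchical window sampling as three nested layers:
    monthly weather factors -> windows of days -> windows of hours."""
    scenario = []
    weather = None
    for month in range(n_months):
        weather = draw_monthly(weather)        # monthly Markov chain
        for _ in range(day_replicas):
            days = draw_days(weather, n_days)  # window of consecutive days
            for day in days:
                for _ in range(hour_replicas):  # windows of consecutive hours
                    scenario.append((month, day, draw_hours(day, n_hours)))
    return scenario
```

The window lengths and replica counts are the knobs that interpolate between full-trajectory and snapshot sampling, as discussed above.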
In supplementary material  we provide more details on the probabilistic models used in this sampling approach.
IV Machine Learning for a Short-Term Proxy
As mentioned earlier, in this work we utilize machine learning to build a short-term proxy. Thus, we replace exact solutions of the multiple UC problem instances, originating in (1e), with quickly-predicted approximations. We use a well-known machine learning algorithm, nearest neighbor classification ; we thus call the proxy UCNN. The methodology relies on a simple concept: creating a large and diverse data-set that contains samples of the environment and grid conditions along with their respective UC solutions. Then, during the assessment of an outage schedule, instead of solving the multiple UC problem instances required to simulate the decisions taken, we simply choose among the already pre-computed UC solutions. The solution used is the one whose conditions are closest to the environment and grid conditions of the current UC problem to be solved; hence the phrase nearest neighbor. We note that to confidently obtain high-quality approximate solutions, we generate the data-set so as to cover all relevant topologies that might be encountered during prediction. In our context, this implies a data-set whose size grows combinatorially with the set of components for which outages ought to be performed. In general, this combinatorial dependence is not necessarily compulsory. Nevertheless, the question of accuracy degradation with smaller data-sets, and of more efficient data-set compositions, is left for future work.
The method’s strength stems from the fact that during the optimization process (1), which is based on multiple outage schedule assessments, UC problem instances are often similar to previously computed ones. Therefore, instead of repeating the expensive process of obtaining these solutions during the assessment of a single scenario and across different scenarios, we utilize samples created ex-ante as representatives of similar conditions. The initial creation of the data-set is a slow process which can be done either offline or online, i.e., by continually adding new solutions to the data-set as they become available. For the experiment described in this section, a data-set of UC problem instances was created. After obtaining this initial data-set, UCNN reduces computation time by several orders of magnitude, with relatively little compromise in quality . A diagram visualizing the method is given in Fig. 3.
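A minimal sketch of the UCNN lookup: solved UC instances are stored as (feature vector, solution) pairs, and a query returns the solution of the closest stored instance. The feature construction and the Euclidean distance metric are illustrative choices of this sketch.

```python
import numpy as np

class UCNN:
    """Nearest-neighbor proxy over pre-computed UC solutions (sketch)."""

    def __init__(self):
        self.features, self.solutions = [], []

    def add(self, feature_vec, uc_solution):
        """Store one solved UC instance and its input-condition features."""
        self.features.append(np.asarray(feature_vec, dtype=float))
        self.solutions.append(uc_solution)

    def predict(self, feature_vec):
        """Return the stored solution whose conditions are closest."""
        q = np.asarray(feature_vec, dtype=float)
        dists = [np.linalg.norm(f - q) for f in self.features]
        return self.solutions[int(np.argmin(dists))]
```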
In addition to the direct UC approximation comparison in , we examine the resulting accuracy of outage schedule assessment when using UCNN to solve the short-term subproblem instead of exact UC computations. To do so, we generate four arbitrary outage schedules under the configuration given in Section VI. Then, for each of these schedules, we present in Fig. 4 means and standard deviations of several metrics in our simulation across the year’s progress. These metrics are i) day-ahead operational costs and ii) load shedding amounts, taken from the short-term UC simulation. Additionally, they include the real-time values of iii) reliability as defined in (2) and iv) real-time operational costs. In all of these plots, the red curve corresponds to an empty, no-outage schedule given as a baseline, evaluated using exact UC simulation; the blue and green curves are respectively based on exact UC and UCNN simulations of the arbitrary outage schedules. The persistent proximity of the green curve to the blue one demonstrates the low approximation error when using UCNN, as it propagates to the four inspected metrics during the simulation.
V Distributed Simulation-Based Cross Entropy
Problem (1) is a non-convex combinatorial stochastic optimization program, with inner mixed-integer linear programs (MILPs). As such, it is too complex for the objective and constraints to be expressed in explicit analytic form and for the program to be solved using gradient-based optimization. Deriving a gradient estimator is a challenging problem and requires large quantities of generated data . Furthermore, gradient-based approaches would preclude the option of utilizing smaller-horizon machine learning proxies such as our UCNN. For these reasons, we choose a gradient-free simulation-based optimization approach. It is performed with distributed Monte-Carlo sampling, where multiple candidate solutions are assessed in parallel on multiple servers. Each month of each such assessment is itself simulated in parallel.
Our algorithm of choice is Cross Entropy (CE) . It is a randomized percentile optimization method that has been shown to be useful for solving power system combinatorial problems . In CE, a parametric distribution is maintained over the solution space. The algorithm iteratively performs consecutive steps of i) drawing multiple candidate solutions (outage schedules) and evaluating their objective values, followed by ii) updating the parametric solution distribution based on the top percentile of samples. More details on our CE implementation are given in the supplementary material . Constraint satisfaction is ensured in the following way: chance-constraints (1b)-(1c) are evaluated empirically and their violation is penalized with increasing-slope barrier functions; feasibility constraint (1d) is ensured via the structure of the CE parametric distribution; inner constraints (1e)-(1f) are ensured via the solvers used for simulating them.
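A minimal CE sketch over discrete outage start months, assuming a user-supplied penalized objective `score(schedule)` to be minimized (standing in for the barrier-penalized costs described above). The per-outage categorical distributions and the smoothing factor are standard CE choices, not the paper's exact parameterization.

```python
import numpy as np

def cross_entropy(score, n_outages, n_months, n_samples=200,
                  elite_frac=0.1, n_iters=20, smoothing=0.7, seed=0):
    """Minimize score() over vectors of outage start months via CE."""
    rng = np.random.default_rng(seed)
    p = np.full((n_outages, n_months), 1.0 / n_months)  # uniform start
    for _ in range(n_iters):
        samples = np.stack([[rng.choice(n_months, p=p[j])
                             for j in range(n_outages)]
                            for _ in range(n_samples)])
        scores = np.array([score(s) for s in samples])
        elite = samples[np.argsort(scores)[:int(elite_frac * n_samples)]]
        for j in range(n_outages):  # refit each categorical to the elites
            freq = np.bincount(elite[:, j], minlength=n_months) / len(elite)
            p[j] = smoothing * freq + (1 - smoothing) * p[j]
    return p.argmax(axis=1)
```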
We run our experiments on a Sun cluster with Intel Xeon(R) CPUs GHz, containing a total of 300 cores, each with 2GB of memory. All code is written in Matlab . In each iteration of the CE algorithm, we assess the objective and constraint values of drawn outage schedules in parallel, while also parallelizing the simulation of each of the months. The simulation parameters introduced in Section III-D, depicting daily and hourly trajectory length and multiplicity, are . This totals a year-long trajectory which is sampled times.
VI-A Model Assumptions
In our experiments we set the horizon in (1) to be one year. We consider transmission line outages with monthly candidate outage moments, where the duration of each outage is one month.
As for the inner subproblems, the UC formulation in (1e) is implemented as a multi-stage DCOPF constituting a MILP, where the objective and decision variables comprise generation, start-up, wind-curtailment, and load shedding. The real-time formulation in (1f) is implemented similarly as an hourly DCOPF, where the objective and decision variables comprise redispatch, wind-curtailment, load shedding and re-commitment w.r.t. the solution of (1e). For both short-term and real-time, the reliability criterion used is N-0; i.e., no contingency list is considered at the subproblem level. Instead, our probabilistic notion of N-1 resiliency is ensured via (1b). The rigorous formulations of subproblems (1e)-(1f) are given in the supplementary material . We model these optimizations with YALMIP  and solve them with CPLEX .
VI-B Test-Cases and Outages
In our simulation, we consider the IEEE-RTS79 and IEEE-RTS96 test-cases . We adopt updated generator parameters from Kirschen et al. , namely their capacities, min-output, ramp up/down limits, min up/down times, price curves and start-up costs. Peak loads and hourly demand means are based on data from the US published in . Capacities and means of hourly wind generation are also based on real data, taken from . Based on these values, both the demand and wind processes are sampled from a multivariate normal distribution with a standard deviation that is a fixed fraction of the mean: for demand and for wind. Moreover, a monthly trend in demand and wind governs the mean profiles . The value of lost load is set to , taken from , and the wind-curtailment price is set to , taken from .
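The demand/wind sampling described above can be sketched as follows; the fraction-of-mean standard deviation follows the text, while the clipping at zero is an added assumption of this sketch to keep draws physical.

```python
import numpy as np

def sample_profile(monthly_mean, frac_std, rng=None):
    """One draw around a monthly mean profile, with std = frac_std * mean."""
    rng = rng or np.random.default_rng()
    mean = np.asarray(monthly_mean, dtype=float)
    draw = rng.normal(mean, frac_std * mean)
    return np.clip(draw, 0.0, None)  # keep demand/wind non-negative
```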
Additionally, we slightly modify the test-cases so as to create several ’bottleneck’ areas, providing conditions under which different outage schedules differ in quality. In RTS79, these modifications include i) removal of the transmission line between buses 1 and 2, and ii) a shift of the loads from buses 1 and 2 to buses 3 and 4, respectively. In the case of RTS96, the exact same modifications are replicated in all three zones.
Next, we specify the outage lists. For RTS79, the list is composed of outages: outages per each of lines , and outage for line . For RTS96, the list is composed of outages: per each zone, plus for the interconnections. Specifically, in the first zone of RTS96 we have outages per each of lines , and outage for line ; these are replicated similarly to the equivalent lines in the second and third zones. The additional interconnection outages are per each of lines . The test-case modifications and outages are visualized in Fig. 5.
As explained in Section IV, the UCNN proxy data-set size is linearly dependent on the set of outage combinations. This implies a huge data-set given the above-listed outages. To tackle this, when scheduling outages for RTS96, we assume that the year is partitioned into three periods of 4 months, and each of the three “zone operators” is exclusively allocated distinct months in which to conduct its outages. This is enforced via the feasibility constraint (1d). As for the outages of the additional interconnections, those are independent and free to be scheduled in any of the year’s months. We thereby do away with the exponential dependence of UCNN’s data-set complexity on the number of zones, i.e., reduce the training set size to .
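The zonal time-allocation rule can be expressed as a simple feasibility predicate; the month windows and outage naming below are illustrative assumptions.

```python
def respects_zonal_allocation(schedule, zone_of, zone_months):
    """schedule: outage -> start month; zone_of: outage -> zone id, or
    None for interconnection outages, which are unrestricted."""
    for outage, month in schedule.items():
        zone = zone_of.get(outage)
        if zone is not None and month not in zone_months[zone]:
            return False
    return True
```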
Fig. 6 exhibits the fast convergence expected from CE when solving (1) for the two test-cases, along with intriguing differences between them. It gives the median with upper and lower quartiles of the top CE percentile for three metrics: operational costs from (1a) (redispatch, wind curtailment and unit re-commitment), average reliability from (1b), and average load shedding from (1c). In both test-cases, operational costs drop significantly. As for the reliability and load shedding, a somewhat complementary behavior is observed in the two test-cases. The reliability in RTS79 starts off with values high enough to satisfy its constraint (1b), while the load shedding amount starts high and quickly drops to a satisfying level. The exact opposite happens for RTS96: reliability starts low and increases drastically, while load shedding values consistently remain low throughout the optimization process.
We also visualize the convergence in the space of outage schedules in Fig. 7 via gray-level-mapped matrices. The rows of these matrices denote the transmission lines and the columns denote the outage moments. For a given entry, the gray-level corresponds to the relative intensity of outages selected for the specific line-month combination. The initial CE iteration begins with uniformly-drawn candidate outage moments, followed by convergence towards a single solution. In the case of RTS96, the zonal time-allocation can be seen in the form of three shaded blocks of entries, with the three interconnection outages appearing as three shaded independent rows.
To demonstrate the efficacy of the optimal outage schedule, we compare it with arbitrary candidate outage schedules for RTS96. We do so by generating random schedules that comply with the zonal time allocation, and evaluating them as described throughout this work. We then perform additional evaluations of our solution to (1). Fig. 8(a) displays operational cost, reliability, and load shedding histograms of the evaluated schedules. Our optimization solution consistently exhibits the lowest operational costs, highest reliability, and lowest load shedding. One exception to its optimality is a single random schedule that achieves reliability greater than . Therefore, we also include a scatter plot in Fig. 8(b) to capture the reliability vs. load shedding tradeoff. It reveals that the aforementioned highly-reliable random schedule suffers from high load-shedding values, as opposed to our optimization solution.
The power system infrastructure is ageing, and its maintenance is becoming ever more costly and complex. This calls for new, sophisticated outage scheduling tools that handle uncertainty and coordination with operations. The scenario assessment framework introduced in this work enables detailed evaluation of the intricate implications an outage schedule has on a power system. We harness the power of machine learning and distributed computing to tractably perform multiple schedule assessments in parallel, and wrap them in an optimization framework that finds convincingly high-quality schedules. An additional, straightforward application of the methodologies introduced here is the assessment of a predefined maintenance schedule considered by experts.
The focus of this work is on the probabilistic framework and hierarchical methodologies. Nevertheless, we believe it can yield new insights for both academic networks and more complex real-world test-cases.
The proposed framework is flexible and can be adapted to different practical cost functions and reliability criteria.
-  GARPUR Consortium, “D5.1, functional analysis of asset management processes,” http://www.garpur-project.eu/deliverables, Oct. 2015.
-  G. Dalal, E. Gilboa, and S. Mannor, “Distributed scenario-based optimization for asset management in a hierarchical decision making environment,” in Power Systems Computation Conference, 2016.
-  E. Karangelos and L. Wehenkel, “Probabilistic reliability management approach and criteria for power system real-time operation,” in Proc. of Power Systems Computation Conference (PSCC 2016), 2016. [Online]. Available: http://hdl.handle.net/2268/193403
-  GARPUR Consortium, “D2.2, Guidelines for implementing the new reliability assessment and optimization methodology,” Available: http://www.garpur-project.eu/deliverables, Oct. 2016.
-  ——, “D5.2, Pathways for mid-term and long-term asset management,” Available: http://www.garpur-project.eu/deliverables, Oct. 2016.
-  A. Abiri-Jahromi, M. Parvania, F. Bouffard, and M. Fotuhi-Firuzabad, “A two-stage framework for power transformer asset maintenance management - part I: Models and formulations,” IEEE Transactions on Power Systems, vol. 28, no. 2, pp. 1395–1403, 2013.
-  Y. Jiang, Z. Zhong, J. McCalley, and T. Voorhis, “Risk-based maintenance optimization for transmission equipment,” in Proc. of 12th Annual Substations Equipment Diagnostics Conference, 2004.
-  Y. Fu, M. Shahidehpour, and Z. Li, “Security-constrained optimal coordination of generation and transmission maintenance outage scheduling,” IEEE Trans. on Power Systems, vol. 22, no. 3, pp. 1302–1313, 2007.
-  M. Marin, E. Karangelos, and L. Wehenkel, “A computational model of mid-term outage scheduling for long-term system studies,” in IEEE PowerTech. IEEE, 2017, pp. 1–6.
-  R. S. Dembo, “Scenario optimization,” Annals of Operations Research, vol. 30, no. 1, pp. 63–80, 1991.
-  S. Pineda, J. M. Morales, and T. K. Boomsma, “Impact of forecast errors on expansion planning of power systems with a renewables target,” European Journal of Operational Research, 2016.
-  G. Dalal, E. Gilboa, S. Mannor, and L. Wehenkel, “Supplementary material for: Chance-constrained outage scheduling using a machine learning proxy,” arXiv preprint arXiv:submit/111111, 2017.
-  T. Cover and P. Hart, “Nearest neighbor pattern classification,” IEEE Transactions on Information Theory, vol. 13, no. 1, pp. 21–27, 1967.
-  G. Dalal, E. Gilboa, S. Mannor, and L. Wehenkel, “Unit commitment using nearest neighbor as a short-term proxy,” arXiv preprint arXiv:1611.10215, 2016.
-  Y. Nesterov and V. Spokoiny, “Random gradient-free minimization of convex functions,” Foundations of Computational Mathematics, vol. 17, no. 2, pp. 527–566, 2017.
-  P.-T. De Boer, D. P. Kroese, S. Mannor, and R. Y. Rubinstein, “A tutorial on the cross-entropy method,” Annals of Operations Research, vol. 134, no. 1, pp. 19–67, 2005.
-  D. Ernst, M. Glavic, G.-B. Stan, S. Mannor, and L. Wehenkel, “The cross-entropy method for power system combinatorial optimization problems,” in 2007 Power Tech, 2007.
-  “MATLAB: The language of technical computing,” http://www.matlab.com.
-  J. Löfberg, “YALMIP: A toolbox for modeling and optimization in MATLAB,” in Proceedings of the CACSD Conference, Taipei, Taiwan, 2004. [Online]. Available: http://users.isy.liu.se/johanl/yalmip
-  “IBM ILOG CPLEX Optimizer,” http://www-01.ibm.com/software/integration/optimization/cplex-optimizer/, 2010.
-  Probability Methods Subcommittee, “IEEE reliability test system,” IEEE Transactions on Power Apparatus and Systems, pp. 2047–2054, 1979.
-  H. Pandzic, T. Qiu, and D. S. Kirschen, “Comparison of state-of-the-art transmission constrained unit commitment formulations,” in IEEE Power and Energy Society General Meeting (PES), 2013.
-  Electrical Engineering Department, University of Washington, “Renewable Energy Analysis Lab,” http://www.ee.washington.edu/research/real/library.html.
-  A. Ouammi, H. Dagdougui, R. Sacile, and A. Mimet, “Monthly and seasonal assessment of wind energy characteristics at four monitored locations in liguria region (italy),” Renewable and Sustainable Energy Reviews, vol. 14, no. 7, pp. 1959–1968, 2010.
-  Y. Dvorkin, H. Pandzic, M. Ortega-Vazquez, D. S. Kirschen et al., “A hybrid stochastic/interval approach to transmission-constrained unit commitment,” IEEE Transactions on Power Systems, 2015.
-  R. Loisel, A. Mercier, C. Gatzen, N. Elms, and H. Petric, “Valuation framework for large scale electricity storage in a case with wind curtailment,” Energy Policy, vol. 38, no. 11, pp. 7323–7337, 2010.
-  R. D. Zimmerman, C. E. Murillo-Sánchez, and R. J. Thomas, “MATPOWER: Steady-state operations, planning, and analysis tools for power systems research and education,” IEEE Transactions on Power Systems, vol. 26, no. 1, pp. 12–19, 2011.
-  K. Klink, “Trends in mean monthly maximum and minimum surface wind speeds in the coterminous United States, 1961 to 1990,” Climate Research, vol. 13, no. 3, pp. 193–205, 1999.
Gal Dalal obtained his BSc in electrical engineering from the Technion, Israel, in 2013. Since then he has been a PhD student at the Technion. His research interests lie at the intersection of power systems and probabilistic optimization methods, machine learning, and reinforcement learning.
Elad Gilboa received a B.A. in electrical and computer engineering and an M.E. in biomedical engineering, in 2004 and 2009 respectively, from the Technion, Israel, and a PhD in electrical engineering in 2014 from Washington University in St. Louis, USA, after which he did postdoctoral work in the Machine Learning Group at the Technion. His research includes machine learning and statistical signal processing, with a focus on making these methods scalable to real-world problems.
Shie Mannor received the B.Sc. degree in electrical engineering, the B.A. degree in mathematics, and the Ph.D. degree in electrical engineering from the Technion-Israel Institute of Technology, Haifa, Israel, in 1996, 1996, and 2002, respectively. From 2002 to 2004, he was a Fulbright scholar and a postdoctoral associate at M.I.T. He was with the Department of Electrical and Computer Engineering at McGill University from 2004 to 2010 where he was the Canada Research chair in Machine Learning. He has been with the Faculty of Electrical Engineering at the Technion since 2008 where he is currently a professor. His research interests include machine learning and pattern recognition, planning and control, multi-agent systems, and communications.
Louis Wehenkel graduated in Electrical Engineering (Electronics) in 1986 and received the Ph.D. degree in 1990, both from the University of Liège, where he is full Professor of Electrical Engineering and Computer Science. His research interests lie in the fields of stochastic methods for systems control, optimization, machine learning and data mining, with applications in complex systems, in particular large scale power systems planning, operation and control, industrial process control, bioinformatics and computer vision.
Appendix A Short-term and Real-time Mathematical Formulations
In the case of DC power flow, voltage magnitudes and reactive powers are eliminated from the problem, and real power flows are modeled as linear functions of the voltage angles . This results in a mixed integer-linear program (MILP) that can be solved efficiently using commercial solvers . The full unit-commitment formulation is the following.
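As a minimal numerical illustration of the DC approximation just described, the sketch below solves a tiny 3-bus example. All network data here are made-up illustrative values, not taken from the paper; it only demonstrates that branch flows become linear functions of the voltage angles.

```python
import numpy as np

# Minimal DC power-flow sketch on a hypothetical 3-bus network.
# Under the DC approximation, the real flow on branch (i, j) is
# f_ij = (1/x_ij) * (theta_i - theta_j), linear in the angles.

# Branch list: (from_bus, to_bus, reactance x in p.u.) -- assumed data.
branches = [(0, 1, 0.1), (1, 2, 0.2), (0, 2, 0.25)]
n_bus = 3

# Build the DC susceptance matrix B.
B = np.zeros((n_bus, n_bus))
for i, j, x in branches:
    b = 1.0 / x
    B[i, i] += b
    B[j, j] += b
    B[i, j] -= b
    B[j, i] -= b

# Net injections in p.u. (generation minus load); they sum to zero.
p = np.array([1.0, -0.4, -0.6])

# Solve the reduced system with bus 0 as the angle reference.
theta = np.zeros(n_bus)
theta[1:] = np.linalg.solve(B[1:, 1:], p[1:])

# Recover branch flows from angle differences.
flows = [(1.0 / x) * (theta[i] - theta[j]) for i, j, x in branches]
print(flows)  # flows of 0.6, 0.2, 0.4 p.u. on the three branches
```

Note that power balance holds at every bus: the flows leaving bus 0 sum to its 1.0 p.u. injection.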
Formulation (5) generally supports ensuring the N-1 security criterion via (5l). Specifically, in our simulations we considered only the N-0 case, as noted in the paper body, which is obtained by modifying (5l). The formulation’s components are explained as follows.
denotes the index of a transmission line that is offline; denotes the case in which all lines are connected and online; lines undergoing an outage are excluded from
denotes commitment (on/off) status of all dispatchable generators at all time-steps.
denotes voltage angle vectors for the N-1 network layouts at all time steps.
denote dispatchable generation, wind curtailment and load shedding decision vectors, with being their corresponding prices.
denote minimal up and down time limits for generator , after it had been off/on for / the latter are functions of and , as depicted in
denotes start-up cost of dispatchable generator after it had been off for time-steps.
denotes the overall power balance equation for line being offline.
denote nodal real power injection linear coefficients.
denote linear coefficients of the branch flows at the from ends of each branch (equal to minus those of the to ends, due to the lossless assumption).
denotes a vector of real power consumed by shunt elements.
denotes generator-to-bus connection matrix, where denotes the dot-product of the two vectors.
denotes line flow limits.
denotes the set of indices of reference buses, with being the reference voltage angle.
denote minimal and maximal power outputs of generator .
As for the real-time optimization problem, its formulation is similar to the operational-planning formulation in (5), with the following adaptations:
it is solved individually for each hour, rather than for a whole day-ahead horizon ;
the on/off commitment schedule is no longer a decision variable; rather, it is retrieved from and set as a constraint in real-time operation;
wind power and load forecasts for the time-steps are replaced with their actual realizations ;
a symmetric re-dispatch cost is added to the objective: . It compares the generation planned in the day-ahead plan with the power actually realized in real time .
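The symmetric re-dispatch penalty can be sketched as follows; the price and generator values below are illustrative assumptions, not data from the paper.

```python
# Hedged sketch of a symmetric re-dispatch penalty: the cost grows
# with the absolute deviation of realized generation from the
# day-ahead plan, charged at the same rate in both directions.

def redispatch_cost(p_day_ahead, p_realized, c_rd):
    """Symmetric penalty: c_rd per MW of up- or down-deviation."""
    return sum(c_rd * abs(pa - pr)
               for pa, pr in zip(p_day_ahead, p_realized))

# Example: three generators, a hypothetical price of 5 per MW deviation.
plan = [100.0, 50.0, 0.0]
real = [90.0, 55.0, 0.0]
print(redispatch_cost(plan, real, 5.0))  # 5 * (10 + 5) = 75.0
```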
Given the full realized state as input (either witnessed in real time or sampled as a future realization), we solve the real-time decision problem and obtain the voltage magnitude and angle at all network nodes. These can then be used to evaluate related phenomena, such as the aggregated effect of stress on equipment failure. However, we leave such considerations to future research.
Appendix B Assumed Stochastic Processes
The Bayesian factorization approach, on which our scenario-sampling methodology relies, is based on the following decomposition of the probability of state
where the time index was stripped away for brevity.
Notice that, since each of the real-time and short-term processes is conditioned on higher levels in the hierarchy, the state sequence is a stationary Markov process; i.e.,
where is a stationary state transition probability.
The stochastic processes in the system are the daily and hourly wind power produced by the wind generators, and , respectively; the daily and hourly load processes, and , respectively; and the topology of the network , used for generating the training and test sets of the UCNN algorithm. We now describe the models used for these stochastic components, along with the data and test cases on which they are based.
B-1 Wind power distribution
where is the daily wind mean profile at time-of-day , and is the monthly wind profile relative to its peak at month of the year.
The daily wind generation process is multivariate normal:
where is a constant () that multiplies the mean to obtain a standard deviation that is a fixed fraction of the mean; is a square diagonal matrix with the elements of on its diagonal, assuming the wind generators are uncorrelated; and is truncated to remain between and the generator’s capacity.
The hourly wind generation is assumed to be a biased random walk with expectation ; i.e., the real-time wind process follows the daily forecast up to some accumulated prediction error:
where is Gaussian noise, distributed
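The two-level wind model above can be sketched as follows. The profile values, capacity, and noise scales below are illustrative assumptions; the structure (truncated normal daily draw, biased random walk hourly) mirrors the text.

```python
import numpy as np

# Sketch of the assumed wind model: a daily truncated-normal draw
# around a mean profile, then an hourly biased random walk whose
# accumulated Gaussian error is added to the daily value.

rng = np.random.default_rng(0)

def sample_daily_wind(mean_profile, capacity, frac=0.2):
    """One draw per generator; std = frac * mean, truncated to
    [0, capacity] as in the text."""
    mean = np.asarray(mean_profile)
    draw = rng.normal(mean, frac * mean)
    return np.clip(draw, 0.0, capacity)

def sample_hourly_wind(daily_mean, capacity, sigma=1.0, hours=24):
    """Biased random walk: the daily forecast plus an accumulating
    (cumulative-sum) Gaussian prediction error."""
    error = np.cumsum(rng.normal(0.0, sigma, size=hours))
    return np.clip(daily_mean + error, 0.0, capacity)

# Hypothetical two-generator example, 60 MW capacity each.
daily = sample_daily_wind([30.0, 45.0], capacity=60.0)
hourly = sample_hourly_wind(daily[0], capacity=60.0)
print(daily.shape, hourly.shape)
```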
B-2 Load distribution
The daily load is assumed to follow a distribution similar to the daily wind distribution , with the same formula containing peak loads and daily profiles for each bus, with values taken from . The fraction of the mean used as the standard deviation is set to .
B-3 Outage distribution
Generating the training and test sets for the UCNN proxy involves sampling UC inputs from the distribution . Our sampling technique is based on the following factorization of the random vector : daily wind power and daily load are statistically independent conditioned on the month of the year (which is drawn uniformly first), whereas the daily network topology is independent of them both. Each independent component is thus sampled from its marginal distribution. Section VI lists the transmission lines for which outages are considered. The daily topology is sampled uniformly from the combinatorial outage set.
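The factorized sampling just described can be sketched as below. The candidate line names, seasonal offsets, and distribution parameters are hypothetical placeholders; only the factorization structure (uniform month, then conditionally independent wind and load, then an independent topology) follows the text.

```python
import random

# Sketch of factorized sampling of daily UC inputs: month first,
# then wind and load conditioned on the month, then a topology
# drawn independently of both.

CANDIDATE_LINES = ["L7", "L11", "L23"]  # assumed candidate-outage lines

def sample_uc_input(rng):
    month = rng.randrange(1, 13)  # month drawn uniformly
    # Wind and load are independent given the month (toy seasonality).
    wind = rng.gauss(50.0 + 10.0 * (month in (1, 2, 12)), 10.0)
    load = rng.gauss(80.0 + 15.0 * (month in (6, 7, 8)), 8.0)
    # Topology: a uniformly drawn element of the combinatorial
    # outage set over the candidate lines.
    k = rng.randrange(0, len(CANDIDATE_LINES) + 1)
    outaged = rng.sample(CANDIDATE_LINES, k)
    return {"month": month, "wind": wind, "load": load,
            "outaged": outaged}

sample = sample_uc_input(random.Random(42))
print(sample["month"], sorted(sample["outaged"]))
```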
Appendix C Distributed Cross Entropy Optimization
We solve our outage scheduling formulation using the Cross-Entropy (CE) optimization meta-heuristic . It is a randomized percentile optimization method that has been shown to be useful for solving power system combinatorial problems . In CE, a parametric distribution is maintained over the solution space . At each iteration, until convergence of , it performs two consecutive steps:
drawing candidate outage schedules and evaluating their respective costs in parallel, based on a simulated scenario set (constructed with our Bayesian hierarchical window scenario sampler and decomposed at the monthly level):
updating the parametric solution distribution based on the cheapest -percentile of
This iterative process is visualized in Fig. 9.
Specifically, in our experiments the outage scheduling horizon is one year with monthly candidate outage moments. Therefore, is a binary matrix; i.e., where is the set of transmission lines for which outages ought to be performed. Entry indicates a scheduled outage of line during month .
We thus represent the CE distribution with a matrix of size whose entries express outage likelihood. At iteration these are all initialized to . As explained in the experiments section, according to the outage lists for the IEEE-RTS79 and IEEE-RTS96, we need to schedule either or outages for each line in , depending on the line. Thus, for each row (corresponding to line ) we respectively draw or entries out of the candidate entries. This per-row sampling is performed by drawing one of the or, alternatively, entry combinations with probability proportional to its likelihood, computed using matrix . The first step of the CE algorithm thus iterates the above procedure times to sample .
The second step of the CE algorithm updates as follows. Let be the set of indices of the cheapest -percentile of costs . Then we update the matrix ; i.e., the entries of the matrix representing are set to the average of the top solutions’ entries.
Lastly, our convergence criterion is met when the entropy of drops below some small . This occurs when all entries are sufficiently close to either or .
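The CE loop described above can be sketched as follows. For simplicity this toy version schedules exactly one outage per line, uses a stand-in cost in place of the paper's scenario-based assessment, and adds a small smoothing term to avoid premature collapse of the distribution; all numbers are illustrative assumptions.

```python
import numpy as np

# Minimal cross-entropy sketch over a binary (line x month) outage
# matrix: sample schedules from a probability matrix P, keep the
# cheapest percentile, and refit P from the elite samples.

rng = np.random.default_rng(1)
n_lines, n_months = 4, 12
P = np.full((n_lines, n_months), 1.0 / n_months)  # uniform init

def sample_schedule(P):
    """Draw one outage month per line with probability P[line]."""
    return np.array([rng.choice(n_months, p=row / row.sum())
                     for row in P])

def toy_cost(months):
    # Stand-in cost: outages are cheapest around month 6.
    return float(np.sum((months - 6) ** 2))

for _ in range(50):
    candidates = [sample_schedule(P) for _ in range(100)]
    costs = np.array([toy_cost(c) for c in candidates])
    elite = [candidates[i] for i in np.argsort(costs)[:10]]  # top 10%
    # Refit P as the empirical frequency of elite entries.
    P = np.zeros_like(P)
    for sched in elite:
        for line, month in enumerate(sched):
            P[line, month] += 1.0 / len(elite)
    P = 0.9 * P + 0.1 / n_months  # smoothing against early collapse

print(P.argmax(axis=1))  # months the distribution has concentrated on
```

With the smoothing term removed, the entries of P converge toward 0 or 1 and the entropy-based stopping rule of the text applies directly.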