
Modern electric power grids face a variety of new challenges, and there is an urgent need to improve grid resilience. Rather than implementing redundant preventive measures, a more effective approach is to focus on grid intelligence. This paper presents the foundation for an intelligent operational strategy that enables the grid to assess its current dynamic state instantaneously. Traditional forms of real-time power system security assessment consist mainly of methods based on power flow analyses and are hence static in nature. Dynamic security assessment requires time-domain simulations (TDS) that are too computationally involved to be performed in real-time. This paper employs machine learning (ML) techniques for real-time assessment of grid resiliency. ML techniques can organize the large amounts of data gathered from such time-domain simulations and extract useful information in order to assess system security instantaneously. Further, this paper develops an approach showing that a few operating points of the system, called landmark points, contain enough information to capture the nonlinear dynamics present in the system. The proposed approach shows improved prediction accuracy compared with the case without landmark points.

In the wake of new vulnerabilities such as those arising from severe weather events and cyber-attacks, current electric grids can no longer be operated as they were in the past. It is becoming increasingly difficult to analyze different combinations of contingencies under changing scenarios. Grid resilience and improved situational awareness will form the basis of future electric grids in order to tackle these new challenges. The most cost-effective way to meet such stringent requirements is through intelligent operation of the grid by employing data-driven models that are both informational and analytical in nature. The key attribute involved here is the ability to assess the current state of the power system in real-time in terms of its security. Power system security is defined as the ability of a power system to survive imminent disturbances (contingencies) without interruption of customer service. Historically, it has been recognized that for a power system to be secure, it must be stable against all types of disturbances [

Security in terms of operational requirements implies that, following a sudden disturbance, a power system is secure if and only if: 1) it survives the transient swings and reaches an acceptable steady-state condition, and 2) there are no limit violations in the new steady-state condition. The first requirement can be verified by carrying out time-domain simulations to investigate instability phenomena such as loss of synchronism or voltage collapse in the post-contingency transient phase. The second requirement is met by using power-flow-based methods to assess the new steady-state condition for voltage and current limit violations.

Time-domain simulations (TDS) are computationally involved and too complex to be performed in real-time. Therefore, for many years, the electric utility industry's framework for real-time security assessment consisted mainly of solution methods that met only the second requirement stated above. This type of real-time security analysis is prevalent even today and is commonly referred to as "Static Security Assessment (SSA)". In contrast, a "Dynamic Security Assessment (DSA)" procedure strives to meet both requirements in real-time in order to assess power system security.

Different forms of DSA practices have existed in North America since the late 1980s [

Machine learning (ML) techniques have the ability to assimilate and reason with knowledge the way the human brain does. Such techniques are primarily driven by data, which could be in the form of various power system parameters such as [

This paper presents a framework that would enable implementation of such powerful machine learning techniques for real-time assessment of grid resilience. A standard IEEE 14-bus system is used in this paper for simulation purposes [

Static security assessment (SSA) provides a mathematical framework to compute stability limits for individual buses and lines using power-flow-based methods. This involves checking for steady-state voltage violations at every bus in the system. Power-Voltage (PV) curves are plotted for each bus by systematically loading the base case of the power system under consideration. This is achieved by means of an algorithm called "Continuation Power Flow (CPF)" [

CPF is a "case worsening" procedure in which the power system is loaded in steps as follows:

P_{G} = P_{G0}(1 + λ),  P_{L} = P_{L0}(1 + λ),  Q_{L} = Q_{L0}(1 + λ)  (1)

where P_{G0}, P_{L0}, Q_{L0} are the base-case generator and load powers (in per-unit) and λ is the loading parameter (in per-unit). CPF facilitates plotting of voltage curves as a function of the loading parameter λ for each bus.
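A minimal sketch of this step-loading, assuming the uniform (1 + λ) scaling of the base-case powers in Equation (1); the function name and per-unit values are illustrative, not part of the paper's implementation.

```python
# Sketch of the CPF step-loading of Equation (1).
# Base-case powers (per unit) are scaled uniformly by the loading parameter lam.

def load_step(p_g0, p_l0, q_l0, lam):
    """Return generator and load powers scaled by (1 + lam)."""
    scale = 1.0 + lam
    p_g = [p * scale for p in p_g0]
    p_l = [p * scale for p in p_l0]
    q_l = [q * scale for q in q_l0]
    return p_g, p_l, q_l

# At lam = 0 the base case is recovered; lam = 0.5 loads the system by 50%.
p_g, p_l, q_l = load_step([1.0, 2.0], [2.0, 4.0], [0.5, 1.0], 0.5)
print(p_g)  # [1.5, 3.0]
```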

As stated earlier, such a framework can be used to generate a dataset consisting of multiple steady-state operating points. For an n-bus system, every such operating point can be represented by a feature vector x of dimension 2n consisting of n bus voltages and n bus angles as features. A set S containing such objects is given by,

S = { x = [V_{1}, V_{2}, ..., V_{n}, δ_{1}, δ_{2}, ..., δ_{n}]^{T} }  (2)

where the V_{i}'s are bus voltages (in per-unit) and the δ_{i}'s are bus angles (in degrees).
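The 2n-dimensional object x of Equation (2) can be sketched as follows; the function name and the voltage/angle values are illustrative.

```python
# Illustrative construction of the 2n-dimensional feature vector x of
# Equation (2) from per-unit bus voltages and bus angles (in degrees).

def make_feature_vector(voltages, angles):
    """Concatenate n bus voltages and n bus angles into one object x."""
    assert len(voltages) == len(angles)
    return list(voltages) + list(angles)

x = make_feature_vector([1.06, 1.045, 1.01], [0.0, -4.98, -12.72])
print(len(x))  # 6  (2n features for n = 3 buses)
```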

SSA is performed on the standard IEEE 14-bus system for the following voltage stability criteria at each bus: V_{max} = 1.2 pu and V_{min} = 0.8 pu. Generators are represented by machine models along with automatic voltage regulators and turbine governors. A CPF routine is performed for each line outage of this power system. Thus, a maximum loading parameter λ_{maxi} is calculated for each line outage i, taking the voltage stability criteria into account. The set represented by Equation (2) is generated only for values of λ given by,

0 ≤ λ ≤ λ_{maxi}  (3)

It should be noted that these λ_{maxi} values account only for steady-state voltage violations and hence do not provide any information about dynamic system security. In order to account for dynamic stability, time-domain simulations are performed for each operating point, as described in the next section. All routines are carried out using the PSAT toolbox for Matlab [

The goal of DSA is to classify different cases based on their dynamic security severity. Dynamic security depends on the time responses of various system variables for the contingency under consideration. As mentioned earlier, it is not possible to perform computationally intensive time-domain simulations in real-time. Nonetheless, machine learning techniques have the ability to extract information from offline time-domain simulations. Such information can then be used to predict dynamic system security for new configurations, avoiding lengthy time-domain simulations. To implement such an application, detailed time-domain simulations must be conducted for different operating points. Thus, a database on which ML techniques can operate needs to be generated in offline mode.

The database is generated in the form of a feature matrix X and an output vector y. Each row of the feature matrix X represents a steady-state operating point in the form of the object x given in Equation (2). Each element of the output vector y records whether the corresponding operating point remains stable for the contingency under consideration, as determined from a time-domain simulation using the same voltage criteria (V_{max} = 1.2 pu and V_{min} = 0.8 pu).

Essentially, DSA is a mapping between each object x and its resiliency against the contingency under consideration, expressed by a function f such that,

y = f(x)  (4)

where y = 1 if the system is stable and y = 0 otherwise.

The next section describes the application of machine learning techniques in order to arrive at this unknown function f.

Machine learning techniques can be applied to the database generated in the previous section in the form of the feature matrix X (size m × 2n) and the output vector y (size m × 1). Each row i of matrix X is in the form of the object x given in Equation (2) and represents the i^{th} training example, x^{(i)}. Similarly, the i^{th} row of vector y represents the output of the i^{th} training example and is represented by a bit y^{(i)} (either 0 or 1). Therefore, we have,

x^{(i)} = i^{th} training example

y^{(i)} = output (stability) of the i^{th} training example

For 2n features and m training examples, matrix X and vector y are given as follows,

X = [x^{(1)}, x^{(2)}, ..., x^{(m)}]^{T},  y = [y^{(1)}, y^{(2)}, ..., y^{(m)}]^{T}  (5)

Next, a prediction/hypothesis function h in terms of a parameter vector q (a column vector of size 2n) is proposed as follows,

h(x) = g(q^{T} x)  (6)

where x is any training example vector and g depends on the machine learning algorithm being employed.

The cost function J for machine learning algorithms is generally of the form [

J(q) = (1/2m) Σ_{i=1}^{m} (h(x^{(i)}) − y^{(i)})^{2}  (7)

The above cost function is proportional to the mean of the squared errors in predicting the outputs of the m training examples. Such a cost function can be minimized using an analytical method or the batch gradient descent method. The optimal parameter vector q thus derived can be used for predicting the stability of future cases in real-time.
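As a sketch, batch gradient descent on a squared-error cost of the form in Equation (7) can be written as follows, shown here with the simplest choice g(z) = z; the step size, iteration count, and data are illustrative, not the paper's settings.

```python
# Minimal batch gradient descent on the squared-error cost of Equation (7),
# with the identity choice g(z) = z (i.e., linear regression).

def gradient_descent(X, y, alpha=0.1, iters=500):
    m, n = len(X), len(X[0])
    q = [0.0] * n
    for _ in range(iters):
        # Accumulate the gradient of J over all m training examples.
        grad = [0.0] * n
        for xi, yi in zip(X, y):
            err = sum(qj * xj for qj, xj in zip(q, xi)) - yi
            for j in range(n):
                grad[j] += err * xi[j] / m
        # Simultaneous update of every parameter.
        q = [qj - alpha * gj for qj, gj in zip(q, grad)]
    return q

# Fit y = 2*x from two examples; q converges to 2.
q = gradient_descent([[1.0], [2.0]], [2.0, 4.0])
print(round(q[0], 3))  # 2.0
```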

The problem presented in this paper is to classify a TDS as stable (1) or unstable (0). For such classification problems, logistic regression can be used, in which case the functions g and J are given as follows [

g(z) = 1 / (1 + e^{−z})  (8)

and

J(q) = −(1/m) Σ_{i=1}^{m} [ y^{(i)} log(h(x^{(i)})) + (1 − y^{(i)}) log(1 − h(x^{(i)})) ]  (9)

The function g(z) given in Equation (8) is a sigmoid function, and its value lies between 0 and 1. For classification purposes, TDS cases for which g(z) is greater than 0.5 can be considered stable and the rest unstable. At this point, it should be noted that the function h given in Equation (6) approximates the unknown function f of the previous section when the parameter vector q is optimal. The approximated function f_{apprx} can be given by,

f_{apprx}(x) = 1 if h(x) > 0.5, and 0 otherwise  (10)
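A small sketch of the sigmoid of Equation (8) and the 0.5 decision rule of Equation (10); the parameter vector and example values are illustrative.

```python
import math

# The sigmoid of Equation (8) and the 0.5 decision rule used to label a
# time-domain simulation as stable (1) or unstable (0).

def g(z):
    return 1.0 / (1.0 + math.exp(-z))

def classify(q, x):
    """Approximate f as in Equation (10): threshold h(x) = g(q^T x) at 0.5."""
    z = sum(qi * xi for qi, xi in zip(q, x))
    return 1 if g(z) > 0.5 else 0

print(g(0.0))                             # 0.5
print(classify([1.0, -1.0], [2.0, 1.0]))  # 1  (z = 1 > 0, so g(z) > 0.5)
```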

In order to test the algorithm, the 14-bus dataset represented by matrix X and vector y (as generated in the previous section) is divided into a training set (75%) and a test set (25%), which is normal practice in the ML domain. Constant feature columns, such as those containing PV bus voltages and reference angles, are deleted from X, since constant feature values do not add any valuable information. Therefore, a matrix X with 22 columns (features) is used in this paper. The prediction accuracy is evaluated using f_{apprx} given in Equation (10) and is calculated as the percentage of test cases classified correctly.
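A minimal sketch of the 75/25 split and the removal of constant feature columns described above; the data, function names, and the fixed seed are illustrative, not the paper's actual implementation.

```python
import random

# Illustrative 75/25 split and removal of constant feature columns,
# as done for the 14-bus dataset (dimensions here are made up).

def drop_constant_columns(X):
    """Keep only columns that take more than one distinct value."""
    keep = [j for j in range(len(X[0])) if len({row[j] for row in X}) > 1]
    return [[row[j] for j in keep] for row in X]

def train_test_split(X, y, test_frac=0.25, seed=0):
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    cut = int(len(X) * (1 - test_frac))
    tr, te = idx[:cut], idx[cut:]
    return ([X[i] for i in tr], [y[i] for i in tr],
            [X[i] for i in te], [y[i] for i in te])

X = [[1.0, 0.5], [1.0, 0.7], [1.0, 0.9], [1.0, 1.1]]
X2 = drop_constant_columns(X)
print(len(X2[0]))  # 1  (the constant first column is removed)
```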

The next section of this paper introduces the concept of “landmark points” and “linear kernel”. Further, this paper presents a strategy to select best landmark points in order to improve the prediction accuracy.

The concept of selecting landmark points gains importance from the fact that a few training examples may contain the most relevant information about the inherent dynamics present in the dataset [

In order to demonstrate the effectiveness of this concept, L landmark points are drawn at random from the rows of matrix X, and every (training example, landmark) pair is compared using a linear kernel [. The linear kernel computes the similarity between a training example x^{(i)} and a landmark l^{(j)} using the dot product and is given by,

k(x^{(i)}, l^{(j)}) = (x^{(i)})^{T} l^{(j)}

The similarity is calculated between all training examples i: 1 ≤ i ≤ m and landmark points j: 1 ≤ j ≤ L. The original feature matrix X (size m × 2n) is thus transformed into a new matrix X′ (size m × L).
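The kernel transformation from X (m × 2n) to the m × L matrix of similarities can be sketched as follows; the example matrix and the single landmark are made up, and each entry of the result is the dot product of a training example with a landmark.

```python
# Transforming X (m x 2n) into a new m x L matrix with the linear
# (dot-product) kernel against L landmark rows.

def linear_kernel_features(X, landmarks):
    return [[sum(a * b for a, b in zip(x, l)) for l in landmarks]
            for x in X]

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
landmarks = [[1.0, 1.0]]                 # L = 1 landmark
Xp = linear_kernel_features(X, landmarks)
print(Xp)  # [[1.0], [1.0], [2.0]]
```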

As shown in

Choosing the most appropriate set of landmark points for a given dataset is not an easy task. In this section, the k-means algorithm is used to derive better landmark points as compared to the random ones selected in the previous section [

In an attempt to find the best landmark points, the original matrix X is divided into two matrices, X_{stable} and X_{unstable}, consisting of only the stable and unstable cases respectively. Using k-means, a total of L centroids are generated for each of these matrices separately, and again using the linear kernel, two new transformed matrices, X′_{stable} and X′_{unstable}, are generated.

The strategy for selecting best landmark points can be stated as follows,

・ Select L random examples from the original matrix X as landmarks and generate the transformed matrix X′

・ Select L centroids from the original matrix X as landmarks and generate the corresponding transformed matrix

・ Select L centroids from X_{stable} as landmarks and generate X′_{stable}

・ Select L centroids from X_{unstable} as landmarks and generate X′_{unstable}

・ Plot learning curves using each of these transformed matrices

・ Compare the training and test set errors and select the best L landmarks
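As an illustration of the centroid-based landmark selection in the steps above, the following is a hedged, pure-Python sketch of Lloyd's k-means algorithm applied to made-up stable cases; L, the data, the seed, and all names are illustrative.

```python
import random

# Sketch: k-means centroids of the stable cases serve as landmark points
# (Lloyd's algorithm; L and the data are illustrative).

def kmeans(X, L, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(X, L)
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared distance).
        clusters = [[] for _ in range(L)]
        for x in X:
            j = min(range(L), key=lambda c: sum((a - b) ** 2
                    for a, b in zip(x, centroids[c])))
            clusters[j].append(x)
        # Move each centroid to the mean of its cluster.
        for j, pts in enumerate(clusters):
            if pts:
                centroids[j] = [sum(col) / len(pts) for col in zip(*pts)]
    return centroids

# Two well-separated groups -> the two centroids land near their means.
X_stable = [[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.2, 5.0]]
cents = kmeans(X_stable, 2)
print(sorted(round(c[0], 1) for c in cents))  # [0.1, 5.1]
```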

The ability to assess the current state of the power system instantaneously is the key attribute needed for enhanced grid resilience. Electric power entities carry out a large number of offline studies on power system models of different sizes, generating vast amounts of data. Machine learning techniques can be employed to use such huge databases to learn the inherent nonlinear relationships that exist among different power system parameters. Such information can later be used online for real-time security analysis.

This paper presents a framework for applying machine learning techniques to the real-time assessment of grid resilience against any contingency, with respect to both static and dynamic stability, using offline databases. Further, this paper demonstrates a strategy to select the best landmark points in order to improve prediction accuracy without compromising computational efficiency. Moreover, ML algorithms are easily scalable, and hence the proposed approach can be extended to analyzing grid resilience against multiple contingencies. Metrics for grid resilience can be developed based on such multi-contingency analyses. With the large-scale penetration of renewable energy into the current grid and the emergence of microgrids, future grid applications will require real-time training in order to extract useful information on a continuous basis. Machine learning techniques can accommodate such complex requirements posed by the continually changing electric grid and hence will play an important role in realizing next-generation real-time applications.

This work was supported by the OSU Engineering Energy Laboratory and the PSO/Albrecht Naeter Professorship in the School of Electrical and Computer Engineering.

Navin Shenoy and R. Ramakumar (2015) An Approach to Assess the Resiliency of Electric Power Grids. Journal of Power and Energy Engineering, 3, 1-13. doi: 10.4236/jpee.2015.311001