# Electrical Grid Stability Prediction

Harshita Pandey · Sept 18, 2020 · 5 min read

Stable operation of an electrical power grid requires a balance between supply and demand. However, with the increasing share of renewable energy sources, power generation fluctuates strongly. The major challenge, then, is to regulate consumer demand to match the fluctuating power generation. To achieve this goal, the Decentral Smart Grid Control (DSGC) concept introduces real-time pricing: consumer demand data is collected, evaluated centrally against the current power supply, and price information is sent back to customers so they can decide about their usage.

Here, the objective is to build a model that can forecast the stability of the power grid. The local stability of a four-node star electrical grid with centralized production is analyzed by implementing the Decentral Smart Grid Control concept.

Data Source: Electrical Grid Stability Simulated Dataset

##### Attribute Information:
• tau[x]: reaction time of a participant (real, from the range [0.5, 10] s). tau1 is the value for the electricity producer.
• p[x]: nominal power consumed (negative) / produced (positive) (real). For consumers, from the range [-2.0, -0.5] s^-2; p1 = abs(p2 + p3 + p4).
• g[x]: coefficient (gamma) proportional to price elasticity (real, from the range [0.05, 1] s^-1). g1 is the value for the electricity producer.
• stab: the maximal real part of the characteristic equation root (real); if positive, the system is linearly unstable.
• stabf: the stability label of the system (categorical: stable/unstable).
Before we dive deeper into the coding part, let us briefly discuss the popular Python libraries we will be using throughout.

### Python Libraries

Python libraries are collections of functions and methods that allow us to perform many actions without writing the code ourselves.

NumPy: NumPy is a very popular Python library for processing large multi-dimensional arrays and matrices, with the help of a large collection of high-level mathematical functions. It is very useful for fundamental scientific computations in Machine Learning.

Pandas: Pandas is a popular Python library for data analysis. Pandas is developed specifically for data extraction and preparation.

Matplotlib: Matplotlib is a very popular Python library for data visualization. It provides various kinds of graphs and plots for data visualization, viz., histogram, error charts, bar charts, etc.

Scikit-learn: Scikit-learn is one of the most popular ML libraries for classical ML algorithms. Scikit-learn supports most of the supervised and unsupervised learning algorithms. Scikit-learn can also be used for data mining and data analysis, which makes it a great tool for anyone starting out with ML.

Seaborn: Seaborn is a Python data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics.

Tensorflow and Keras: TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML powered applications.

Keras is an open-source neural-network library written in Python. It is capable of running on top of TensorFlow, Microsoft Cognitive Toolkit, R, Theano, or PlaidML. Designed to enable fast experimentation with deep neural networks, it focuses on being user-friendly, modular, and extensible.

### Import Libraries

```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```

```python
# Reading the csv file and returning the first 10 rows of the dataframe.
df = pd.read_csv("grid_data.csv")
df.head(10)
```

### Exploratory Data Analysis

Exploratory data analysis (EDA) is an approach to analyzing data sets to summarize their main characteristics, often with visual methods.

```python
# Summary of the DataFrame.
df.info()
```

```python
# Checking for missing values in each column.
df.isnull().sum()
```

```python
# Descriptive statistics.
df.describe()
```

```python
# The counts of observations in each category.
print(df['stabf'].value_counts())

sns.set_style('whitegrid')
sns.countplot(x='stabf', data=df, palette='YlGnBu_r')
```

```python
# Distribution of observations in column 'stab'.
plt.figure(figsize=(8,4))
sns.distplot(df['stab'], color='r')
```

Correlation states how the features are related to each other or to the target variable. Correlation can be positive (an increase in a feature's value increases the value of the target variable) or negative (an increase in a feature's value decreases the value of the target variable).
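As a quick illustration of the two cases (toy numbers, not the grid dataset), the sign of the Pearson correlation can be checked with NumPy's `corrcoef`:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_pos = 2 * x + 1     # moves with x    -> correlation +1
y_neg = -3 * x + 10   # moves against x -> correlation -1

# np.corrcoef returns the Pearson correlation matrix; [0, 1] is corr(x, y).
print(np.corrcoef(x, y_pos)[0, 1])   # 1.0
print(np.corrcoef(x, y_neg)[0, 1])   # -1.0
```

The heatmap below shows exactly these pairwise coefficients, one cell per feature pair.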

```python
# Correlation Heatmap.
plt.figure(figsize=(14,10))
sns.heatmap(df.corr(), annot=True)
```

### Train Test Split

The train-test split divides the dataset into a training set and a test set. Here, we split the dataset in the ratio 80:20 to create the training and testing subsets, respectively.

```python
X = df.drop(['stab', 'stabf'], axis=1)
y = df['stab']

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
```

### Scaling

Feature scaling is a method used to normalize the range of independent variables. Here we standardize each feature to zero mean and unit variance using StandardScaler.

```python
import joblib
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
joblib.dump(scaler, 'Scaler.joblib')
```

['Scaler.joblib']

### Build the Model

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Dropout
```

```python
# Returns the shape of the array (rows, columns).
X_train.shape
```

(8000, 12)

# Building the Model.

The "exponential linear unit" (ELU) is used as the activation function because it allows negative values, which lets it push mean unit activations closer to zero. ELU speeds up learning in deep neural networks and leads to higher classification accuracies.
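For reference, ELU with alpha = 1 returns x for positive inputs and alpha * (exp(x) - 1) otherwise; a minimal NumPy sketch of the function itself:

```python
import numpy as np

def elu(x, alpha=1.0):
    # x for positive inputs, alpha * (exp(x) - 1) for negative ones
    return np.where(x > 0, x, alpha * (np.exp(x) - 1))

print(elu(np.array([-100.0, -1.0, 0.0, 2.0])))
# negative inputs saturate towards -alpha, so mean activations can sit near zero
```

Unlike ReLU (which clips negatives to exactly 0), these small negative outputs are what pull the mean activation towards zero.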

```python
model = Sequential()
model.add(Dense(12, activation='elu'))
model.add(Dropout(0.5))
model.add(Dense(1))   # single output for regression on 'stab'

model.compile(optimizer='adam', loss='mse')
```

Early Stopping is a technique that terminates training when a monitored metric stops improving.

```python
# This callback will stop the training when there is no improvement in the validation loss.
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=10)
```

A fraction of the training data is used as validation data. The model sets apart this fraction (10%) of the training data, does not train on it, and evaluates the loss on it at the end of each epoch. Keras selects the first 90% as training data and the last 10% as validation data.

```python
model.fit(x=X_train, y=y_train.values, validation_split=0.1,
          batch_size=32, epochs=100, callbacks=[early_stop])
```

```python
# Save the model.
model.save('Electrical_Grid_Stability.h5')
model.summary()
```

Training loss vs. validation loss:

```python
losses = pd.DataFrame(model.history.history)
losses.plot()
```

### Model Evaluation in Regression

```python
from sklearn.metrics import mean_squared_error, mean_absolute_error

predictions = model.predict(X_test)
predictions
```

There are 3 main metrics for model evaluation in regression:

Mean Absolute Error (MAE)

The mean absolute error measures the average absolute difference between the predicted and actual values.

```python
mean_absolute_error(y_test, predictions)
```

0.015933133282084053

Mean Squared Error

The mean squared error (MSE) measures the average squared difference between the estimated values and the actual value.

```python
mean_squared_error(y_test, predictions)
```

0.00041859815952952154

Root Mean Square Error (RMSE)

```python
np.sqrt(mean_squared_error(y_test, predictions))
```

0.020459671540118175
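Note that RMSE is simply the square root of MSE; all three metrics can be checked by hand on a toy example (pure Python, not the model's predictions):

```python
import math

y_true = [1.0, 2.0, 3.0]
y_pred = [1.1, 1.9, 3.3]

errors = [t - p for t, p in zip(y_true, y_pred)]
mae  = sum(abs(e) for e in errors) / len(errors)   # mean absolute error
mse  = sum(e * e for e in errors) / len(errors)    # mean squared error
rmse = math.sqrt(mse)                              # root mean squared error

print(mae, mse, rmse)
```

Squaring the RMSE reported above (≈0.02046) indeed recovers the MSE (≈0.000419).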

### Electrical Grid Stability Prediction and Classification

```python
from tensorflow.keras.models import load_model

model_grid = load_model('Electrical_Grid_Stability.h5')

# 'scaler' is the StandardScaler fitted earlier; it can also be reloaded
# with joblib.load('Scaler.joblib').
def foo(t1, t2, t3, t4, p1, p2, p3, p4, g1, g2, g3, g4):
    X_test = scaler.transform([[t1, t2, t3, t4, p1, p2, p3, p4, g1, g2, g3, g4]])
    prediction = model_grid.predict(X_test)
    print(prediction[0][0])
    if prediction[0][0] >= 0:
        return "Oops! the system is linearly unstable."
    else:
        return "Great! the system is stable."
```

### References

1. Towards Concise Models of Grid Stability

2. Decentral Smart Grid Control

Note: The entire end-to-end code can be downloaded from https://github.com/Harshita9511/Electrical-Grid-Stability-Predictor, and the web application can be accessed at https://electrical-grid-stability.herokuapp.com/