Wavelets and Entropy for Power Quality Assessment

Eduardo Antonio Cano-Plata, Armando J. Ustariz-Farfán, Jorge H. Estrada-Estrada

Department of Electrical, Electronic and Computer Engineering, Universidad Nacional de Colombia, Manizales, Colombia.

**DOI:** 10.4236/ajcm.2017.73022


In this paper, the wavelet transform and entropy are evaluated using three concepts from mathematical analysis: reflexibility, regularity and series obtention. These concepts underline the reason for building a selective reference framework for power quality applications. With this idea, the paper applies the same treatment to the two algorithms (multiresolution and multiscale entropy). The wavelet transform is shown to be the more consistent of the two in the light of reflexibility, regularity and series obtention. The paper proposes a power quality technique named M_{pq}AT.

Keywords

Wavelets, Entropy, Reflexibility, Regularity, Multiresolution, Multiscale


Cano-Plata, E., Ustariz-Farfán, A. and Estrada-Estrada, J. (2017) Wavelets and Entropy for Power Quality Assessment. *American Journal of Computational Mathematics*, **7**, 276-290. doi: 10.4236/ajcm.2017.73022.

1. Introduction

The natural harmony of Fourier’s analysis in the Hilbert space is demonstrated by Riesz-Fischer’s [1] and Plancherel’s [2] theorems. In this harmony, three concepts are summarized: reflexibility, regularity and series obtention.

These three concepts are shown in two ways: first, using the wavelet transform, and second, through numerical entropy.

In [1] an approximation of the n-dimensional wavelet transform was shown through heuristic treatment. Following the same methodology, this article aims to show coincidences between the two methods (in wavelets and entropy) highlighting the following three basic concepts: reflexibility, regularity and series obtention. The orientation of these three concepts determines the way that engineers approach definitions for quality concepts.

In particular, power quality allows the health state of a power system to be identified by applying signal processing techniques to current and voltage waveforms. Therefore, power quality is used as the concept against which the achieved definition will be tested.

2. Wavelet Transform

Definition 1

Function $\phi \in {L}^{2}\left(R\right)$ is called an orthogonal wavelet if the family $\left\{{\phi}_{jk}={2}^{j/2}\phi \left({2}^{j}x-k\right),j,k\in Z\right\}$ is an orthonormal basis of L^{2}(R); that is, if

$\langle {\varphi}_{jk},{\varphi}_{\mathcal{l}m}\rangle ={\delta}_{j\mathcal{l}}{\delta}_{km}=\{\begin{array}{l}0\text{ if }j\ne \mathcal{l}\text{ or }k\ne m\\ 1\text{ if }j=\mathcal{l}\text{ and }k=m\end{array}$ (1)

So, if φ is an orthonormal wavelet and $f\left(x\right)\in {L}^{\text{2}}\left(R\right)$ , then the wavelet series is:

$f\left(x\right)={\displaystyle \underset{j,k=-\infty}{\overset{\infty}{\sum}}{C}_{jk}{\varphi}_{jk}\left(x\right)}={\displaystyle \underset{j=-\infty}{\overset{\infty}{\sum}}{\displaystyle \underset{k=-\infty}{\overset{\infty}{\sum}}{C}_{jk}{\varphi}_{jk}\left(x\right)}}$ (2)

Then

${C}_{jk}=\frac{\langle f,{\varphi}_{jk}\rangle}{\langle {\varphi}_{jk},{\varphi}_{jk}\rangle}=\langle f,{\varphi}_{jk}\rangle $ (3)

that is

$\begin{array}{l}{C}_{jk}={\displaystyle {\int}_{-\infty}^{\infty}f\left(x\right)\stackrel{\xaf}{{\varphi}_{jk}}\text{d}x}\\ {C}_{jk}={\displaystyle {\int}_{-\infty}^{\infty}f\left(x\right){2}^{j/2}\varphi \left({2}^{j}x-k\right)\text{d}x}\\ {C}_{jk}={\left({2}^{j}\right)}^{1/2}{\displaystyle {\int}_{-\infty}^{\infty}f\left(x\right)\stackrel{\xaf}{\varphi \left(\frac{x-k/{2}^{j}}{{2}^{-j}}\right)}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{d}x}\\ {C}_{jk}={\left({2}^{-j}\right)}^{-1/2}{\displaystyle {\int}_{-\infty}^{\infty}f\left(x\right)\stackrel{\xaf}{\varphi \left(\frac{x-k/{2}^{j}}{{2}^{-j}}\right)}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{d}x}\end{array}$ (4)

This motivates defining the wavelet transform relative to the basic wavelet φ as the function:

$CW{T}_{f}\left(a,b\right)={\left|a\right|}^{-1/2}{\displaystyle {\int}_{-\infty}^{\infty}f\left(x\right)\stackrel{\xaf}{\varphi \left(\frac{x-b}{a}\right)}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{d}x}$ (5)

Thus the coefficients C_{jk} of the wavelet series are obtained from the wavelet transform at:

$b=\frac{k}{{2}^{j}}\text{,}a=\frac{1}{{2}^{j}}$ (6)

The demonstration can be seen in [3].
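As a numerical illustration of Equation (5), the following sketch approximates the continuous wavelet transform by a Riemann sum, using the Haar function as the basic wavelet. The function names and the test signal are illustrative choices, not part of the paper:

```python
import numpy as np

def haar(u):
    """Haar mother wavelet: +1 on [0, 0.5), -1 on [0.5, 1), 0 elsewhere."""
    return np.where((u >= 0) & (u < 0.5), 1.0,
                    np.where((u >= 0.5) & (u < 1.0), -1.0, 0.0))

def cwt(f, t, a, b):
    """Riemann-sum approximation of Equation (5):
    CWT_f(a, b) = |a|^(-1/2) * integral of f(x) * psi((x - b) / a) dx
    (the Haar wavelet is real, so no conjugation is needed)."""
    dt = t[1] - t[0]
    return np.abs(a) ** -0.5 * np.sum(f * haar((t - b) / a)) * dt

# One period of a 50 Hz sine; at scale a = 0.02 s the Haar wavelet lines up
# with the positive and negative half-cycles, giving a large coefficient.
t = np.linspace(0.0, 0.02, 2000, endpoint=False)
f = np.sin(2 * np.pi * 50 * t)
coeff = cwt(f, t, a=0.02, b=0.0)   # ≈ 0.09
```

At a mismatched scale or shift, the same sum is much closer to zero, which is what makes (5) useful as a disturbance detector.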

3. Entropy

From a physical point of view, entropy is the concept that measures the tendency towards disorder in nature. This concept has seen important development in the applications derived from it, for example the evaluation of efficiency in electric motors or power systems. Philosophically, the concept has been used for its implications in the understanding of natural elements and their interaction with life.

With regard to information, entropy makes information visible as a message, which must create a link between sender and receiver by means of propagation or transmission (whether physical or abstract). What entropy characterizes is the degree of difficulty imposed by nature: noise, interruptions, and so on.

These are disturbances of the message during transmission, and can represent loss of information as a result of the conditions of the system formed by source, transmitter and receiver.

In this particular case, the tendency of information to decrease can be visualized as loss or disorder, and thus as a form of entropy [4].

Information Entropy

Information Entropy is also known as “Shannon’s entropy”. The coding theorem focuses its attention on random behavior of nature, such as disturbing elements or noise [5] .

An extensive property is one that can be defined through the analysis of systems composed of subsystems: the property of the whole system is built up from the properties of its parts.

Entropy is an extensive property. The information contained in two information channels should equal the sum of the information carried by the two channels individually [6] .

Entropy is defined as a measure of uncertainty for a random variable [5] .

Shannon’s entropy H(X) is defined as:

${H}_{b}\left(X\right)=-{\displaystyle \underset{x\in \Theta}{\sum}p\left({x}_{i}\right){\mathrm{log}}_{b}p\left({x}_{i}\right)}$ (7)

where X represents the random variable with set of values $\Theta $ and probability mass function $p\left({x}_{i}\right)=P\left\{X={x}_{i}\right\}$ , ${x}_{i}\in \Theta $ . The equation is generally calculated with the binary logarithm (b = 2), in which case entropy is expressed in bits (for example, the entropy of throwing a fair die is log₂6 ≈ 2.585 bits). Note that $\u2013p\mathrm{log}\left(p\right)\ge 0$ because $0\le p\le 1$ ; therefore, entropy is non-negative, as observed in reference [4]. If the base is changed to the natural (Napierian) logarithm, i.e. e, the entropy is measured in nats [7].
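Equation (7) can be checked with a short, illustrative computation (the function name and example are ours, not the paper's):

```python
import math

def shannon_entropy(p, base=2):
    """Equation (7): H_b = -sum p_i log_b p_i, with 0 * log(0) taken as 0."""
    return -sum(pi * math.log(pi, base) for pi in p if pi > 0)

# Fair six-sided die: H = log2(6) ≈ 2.585 bits, or log(6) ≈ 1.792 nats.
die = [1 / 6] * 6
h_bits = shannon_entropy(die)               # base 2 → bits
h_nats = shannon_entropy(die, base=math.e)  # base e → nats
```

A deterministic outcome (p = 1) gives H = 0, the minimum; the uniform distribution maximizes H, consistent with entropy as a measure of uncertainty.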

For a time series representing the output of a stochastic process, that is, an ordered sequence of n random variables $\left\{{X}_{i}\right\}=\left\{{X}_{1},\cdots ,{X}_{n}\right\}$ with sets of values ${\Theta}_{1},\cdots ,{\Theta}_{n}$ respectively, the n-dimensional entropy is defined as:

${H}_{n}=-{\displaystyle \underset{{x}_{1}\in {\Theta}_{1}}{\sum}\cdots {\displaystyle \underset{{x}_{n}\in {\Theta}_{n}}{\sum}p\left({x}_{1},\cdots ,{x}_{n}\right)\mathrm{log}p\left({x}_{1},\cdots ,{x}_{n}\right)}}$ (8)

where $p\left({x}_{1},\cdots ,{x}_{n}\right)$ is the joint probability for n variables ${X}_{1},\cdots ,{X}_{n}$ .

The state of a system at a certain moment, X_{n}, is partly determined by its history X_{1}, ${X}_{2},\cdots ,{X}_{n-1}$ . However, each new state of the system brings a certain amount of new information with it.

4. Coexistence of Reflexibility

Theorem 1: Reflexibility:

Let E be a Hilbert space with dual E'. Denote by ⟨·,·⟩ the duality pairing between E' and E, and by ${\Vert \cdot \Vert}^{\ast}$ the dual norm of ${\Vert \cdot \Vert}_{E}$ . Then Riesz's theorem, better known as the concept of reflexivity, says:

If f ∈ E', a unique element u_{f} ∈ E exists, such that:

Part A:

$\{\begin{array}{l}\langle f,v\rangle ={\left(v,{u}_{f}\right)}_{E}\text{}\forall v\in E\\ {\Vert f\Vert}_{*}={\Vert {u}_{f}\Vert}_{E}\end{array}$ (9)

Similarly, each element u ∈ E defines an element f_{u} ∈ E' such that:

Part B:

$\{\begin{array}{l}\langle {f}_{u},v\rangle ={\left(v,u\right)}_{E}\text{}\forall v\in E\\ {\Vert {f}_{u}\Vert}_{*}={\Vert u\Vert}_{E}\end{array}$ (10)

4.1. The Theorem for Wavelet Transform

This is constituted by the following definition:

Part A:

$CW{T}_{f}\left(a,b\right)={\left|a\right|}^{-1/2}{\displaystyle {\int}_{-\infty}^{\infty}f\left(x\right)\stackrel{\xaf}{\varphi \left(\frac{x-b}{a}\right)}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{d}x}$ (11)

Part B:

Involves demonstrating the existence of the inverse transform [2] .

4.2. The Theorem for Information Entropy

This is constituted by definition:

${H}_{b}\left(X\right)=-{\displaystyle \underset{x\in \Theta}{\sum}p\left({x}_{i}\right){\mathrm{log}}_{b}p\left({x}_{i}\right)}$ (12)

However, the sample is not recoverable: an inverse transformation cannot be obtained, and no reconstruction map exists.

5. Characterization of Regularity

The Parseval theorem and the central limit theorem: by definition, regularity indicates the variation of a quantity with respect to its mean.

5.1. The Parseval Identity as It Relates to Wavelet Transform

In the wavelet transform, it is used to characterize the regularity of f in L^{2}, as measured by the Sobolev norm, which the Parseval identity provides.

In other words:

Applying the Parseval identity:

$\begin{array}{l}{\displaystyle \underset{-\infty}{\overset{\infty}{\int}}{\displaystyle \underset{-\infty}{\overset{\infty}{\int}}{\left|CW{T}_{f}\left(a,b\right)\right|}^{2}\cdot \frac{\text{d}a\text{d}b}{{a}^{2}}}}={\displaystyle \underset{-\infty}{\overset{\infty}{\int}}\left(\frac{1}{2\text{\pi}}{\displaystyle \underset{-\infty}{\overset{\infty}{\int}}{\left|{\Psi}^{\ast}\left(a\omega \right)\right|}^{2}{\left|F\left(\omega \right)\right|}^{2}\text{d}\omega}\right)\frac{\text{d}a}{\left|a\right|}}\\ =\frac{1}{2\text{\pi}}{\displaystyle \underset{-\infty}{\overset{\infty}{\int}}{\left|F\left(\omega \right)\right|}^{2}{\displaystyle \underset{-\infty}{\overset{\infty}{\int}}\frac{{\left|\Psi \left(a\omega \right)\right|}^{2}}{\left|a\right|}\text{d}a}\text{d}\omega}\end{array}$ (13)

NOTE: The change in the order of integration is performed in accordance with Fubini's theorem, and the second integral on the right-hand side is C_{ψ}.

Applying the Parseval equality again:

$\frac{1}{{C}_{\psi}}{\displaystyle \underset{-\infty}{\overset{\infty}{\int}}{\displaystyle \underset{-\infty}{\overset{\infty}{\int}}{\left|CW{T}_{f}\left(a,b\right)\right|}^{2}\frac{\text{d}a\text{d}b}{{a}^{2}}}}=\frac{1}{{C}_{\psi}}\frac{{C}_{\psi}}{2\text{\pi}}{\displaystyle \underset{-\infty}{\overset{\infty}{\int}}{\left|F\left(\omega \right)\right|}^{2}\text{d}\omega}={\displaystyle \underset{-\infty}{\overset{\infty}{\int}}{\left|f\left(t\right)\right|}^{2}\text{d}t}$ (14)

In this way, the signal energy is decomposed into the energy of each element or component in frequency bands (wavelets).

5.2. Regularity Basis of Entropy

The definition of entropy comes from the central limit theorem, that is:

Suppose that m is a probability measure on the data from the real signal, with:

${\displaystyle \int x\,m\left(\text{d}x\right)}=0\text{ and }{\displaystyle \int {x}^{2}\,m\left(\text{d}x\right)}={\sigma}^{2}$ (15)

Then, for any interval of A:

$\underset{n\to \infty}{\mathrm{lim}}\left(m\ast m\ast m\ast \cdots \ast m\right)\left(\sqrt{n}A\right)=\frac{1}{\sqrt{2\text{\pi}{\sigma}^{2}}}{\displaystyle \underset{A}{\int}{\text{e}}^{-{x}^{2}/\left(2{\sigma}^{2}\right)}\text{d}x}$ (16)

Convergence in (16) shows the definition of entropy.
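The convergence in (16) can be observed numerically: repeatedly convolving a centered density with itself approaches the Gaussian limit (shown here for the unscaled sum, whose limiting variance is nσ²). The grid sizes and the uniform test density are illustrative choices:

```python
import numpy as np

# Equation (16), numerically: the n-fold convolution of a centered density
# approaches a Gaussian of variance n * sigma^2.
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
p = np.where(np.abs(x) <= 1, 0.5, 0.0)   # uniform on [-1, 1]: sigma^2 = 1/3

n = 8
q = p
for _ in range(n - 1):
    # Riemann-sum convolution: density of the sum of one more draw.
    q = np.convolve(q, p, mode="same") * dx

sigma2 = n / 3
gauss = np.exp(-x ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
err = np.max(np.abs(q - gauss))   # already small at n = 8
```

Even for n = 8 the pointwise difference between the convolved density and the Gaussian is on the order of one percent of the peak value.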

From this definition comes:

5.3. Approximate Entropy (ApE_{n}) and Sample Entropy (SampE_{n}) Algorithms

Derived from Shannon's work, Pincus [8] proposed the approximate entropy algorithm (ApE_{n}), which measures regularity from the mathematical analysis point of view [9].

ApE_{n} algorithm description:

Given an N-sample time series ${X}_{N}=\left\{{x}_{1},\cdots ,{x}_{i},\cdots ,{x}_{N}\right\}$ , two input parameters, m and r, must be supplied. These come from the correlation dimension postulated by Grassberger and Procaccia [10]. Parameter m is the length of the vectors u_{m}(i) generated from the data. Parameter r is the tolerance: the distance that determines which points are considered close to a reference point.

According to the length value m, vectors ${u}_{m}\left(1\right),\cdots ,{u}_{m}\left(N-m+1\right)$ are created, where each vector is expressed as ${u}_{m}\left(i\right)=\left[u\left(i\right),u\left(i+1\right),\cdots ,u\left(i+m-1\right)\right]$ . These vectors represent m consecutive values of the time series x, starting with the i-th element, as shown in Figure 1.

The distance between vectors u_{m}(i) and u_{m}(j) is defined as the maximum of the absolute value of the difference between vector components:

$d\left[{u}_{m}\left(i\right),{u}_{m}\left(j\right)\right]=\underset{k=0,\cdots ,m-1}{\mathrm{max}}\left(\left|u\left(i+k\right)-u\left(j+k\right)\right|\right)$ (17)

If ${C}_{i}^{m}\left(r\right)$ is the probability that vector u_{m}(j) is close to vector u_{m}(i), i.e. the number of j ( $1\le j\le N-m+1$ ) such that $d\left[{u}_{m}\left(i\right),{u}_{m}\left(j\right)\right]\le r$ , divided by the number of vectors extracted from the time series, then for ( $1\le i\le N-m+1$ ) the probability of being within the tolerance is given by:

${C}_{i}^{m}\left(r\right)=\frac{\left(\text{Number of }j\text{ such that }d\left[{u}_{m}\left(i\right),{u}_{m}\left(j\right)\right]\le r\right)}{N-m+1}$ (18)

Figure 1. Calculation for m = 2.

Each element of ${C}_{i}^{m}\left(r\right)$ then measures the regularity, or frequency, of similar values, within length m with r tolerance [11] .

The average ${C}^{m}\left(r\right)$ is then constructed by (19):

${C}^{m}\left(r\right)=\frac{1}{N-m+1}{\displaystyle \underset{i=1}{\overset{N-m+1}{\sum}}{C}_{i}^{m}\left(r\right)}$ (19)

${\Phi}^{m}\left(r\right)$ is defined as the average over i of the logarithm of each ${C}_{i}^{m}\left(r\right)$ element, and is expressed as follows:

${\Phi}^{m}\left(r\right)=\frac{1}{N-m+1}{\displaystyle \underset{i=1}{\overset{N-m+1}{\sum}}\mathrm{log}{C}_{i}^{m}\left(r\right)}$ (20)

Therefore, ApE_{n} is estimated as follows:

$Ap{E}_{n}\left(m,r,N\right)={\Phi}^{m}\left(r\right)-{\Phi}^{m+1}\left(r\right)$ (21)
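The procedure in Equations (17)-(21) can be sketched as follows; the vectorized distance matrix, test signals and seed are our own illustrative choices, not the authors' code:

```python
import numpy as np

def apen(x, m, r):
    """Approximate entropy ApE_n(m, r, N), Equations (17)-(21)."""
    x = np.asarray(x, dtype=float)
    N = len(x)

    def phi(mm):
        # Vectors u_mm(i) of mm consecutive samples (N - mm + 1 of them).
        u = np.array([x[i:i + mm] for i in range(N - mm + 1)])
        # Chebyshev distance between every pair of vectors, Equation (17).
        d = np.max(np.abs(u[:, None, :] - u[None, :, :]), axis=2)
        # C_i^mm(r): fraction of vectors within tolerance r, Equation (18).
        C = np.sum(d <= r, axis=1) / (N - mm + 1)
        # Phi^mm(r): average of the logs, Equation (20).
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)   # Equation (21)

# A regular (sinusoidal) series scores much lower than white noise.
rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 8 * np.pi, 200))
noisy = rng.standard_normal(200)
a_reg = apen(regular, m=2, r=0.2)
a_noise = apen(noisy, m=2, r=0.2 * np.std(noisy))
```

Each C_i includes the self-match i = j, so the logarithm is always defined; a common choice in the literature is r between 0.1 and 0.25 times the standard deviation of the series.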

Sample Entropy (SampE_{n})

An improvement to the $Ap{E}_{n}\left(m,r,N\right)$ algorithm was presented by Richman and Moorman [12]. This algorithm, called Sample Entropy (SampE_{n}), has the advantage of being less dependent on the time series length. Thus:

${U}^{m}\left(r\right)=\frac{1}{N-m}{\displaystyle \underset{i=1}{\overset{N-m}{\sum}}{C}_{i}^{m}\left(r\right)}$ (22)

${U}^{m+1}\left(r\right)=\frac{1}{N-m}{\displaystyle \underset{i=1}{\overset{N-m}{\sum}}{C}_{i}^{m+1}\left(r\right)}$ (23)

Equations ((22) and (23)) define the SampE_{n} quantities using the number of pairs u_{m}(i), u_{m}(j) that comply with parameter r, i.e. such that $d\left[{u}_{m}\left(i\right),{u}_{m}\left(j\right)\right]\le r$ . Here i ≠ j, so the pairing of a vector with itself is not taken into account.

Richman and Moorman defined the sample entropy as:

$Samp{E}_{n}\left(m,r\right)=\underset{N\to \infty}{\mathrm{lim}}-\mathrm{ln}\frac{{U}^{m+1}\left(r\right)}{{U}^{m}\left(r\right)}$ (24)

Which is estimated in statistics [13] as:

$Samp{E}_{n}\left(m,r,N\right)=-\mathrm{ln}\frac{{U}^{m+1}\left(r\right)}{{U}^{m}\left(r\right)}$ (25)
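A minimal sketch of Equations (22)-(25), with self-matches excluded; as before, the helper names and test signals are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def sampen(x, m, r):
    """Sample entropy SampE_n(m, r, N), Equations (22)-(25).
    Self-matches (i == j) are excluded, unlike in ApE_n."""
    x = np.asarray(x, dtype=float)
    N = len(x)

    def matches(length):
        # The N - m template vectors (same i-range for lengths m and m + 1).
        u = np.array([x[i:i + length] for i in range(N - m)])
        d = np.max(np.abs(u[:, None, :] - u[None, :, :]), axis=2)
        # Count pairs within tolerance, then drop the diagonal self-matches.
        return np.sum(d <= r) - len(u)

    B = matches(m)        # length-m matches
    A = matches(m + 1)    # length-(m + 1) matches; always A <= B
    return -np.log(A / B)   # Equation (25)

rng = np.random.default_rng(1)
regular = np.sin(np.linspace(0, 8 * np.pi, 200))
noisy = rng.standard_normal(200)
s_reg = sampen(regular, 2, 0.2)
s_noise = sampen(noisy, 2, 0.2 * np.std(noisy))
```

Because every length-(m+1) match is also a length-m match, A ≤ B and the estimate is non-negative; excluding self-matches removes the bias that makes ApE_{n} depend on N.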

6. Approach of the Series

With regularity defined, it is necessary to introduce a series approach: decomposing a signal into components that can be disaggregated while retaining the same degree of regularity.

6.1. Axiomatic Definition of Multiresolution Using Wavelets

An intuitive idea for the division of the spectrum by a series of discrete wavelets, using filters, is represented in Figure 2.

Definition 2 [2]

A multiresolution structure is a sequence of subspaces {V_{j}} in L^{2}(R), such that:

$\begin{array}{l}1)\text{ }{V}_{j+1}\subseteq {V}_{j}\text{ }\forall j>0\\ 2)\text{ }{\displaystyle {\cup}_{j}{V}_{j}}\text{ is dense in }{L}^{2}\left(R\right)\\ 3)\text{ }{\displaystyle {\cap}_{j}{V}_{j}}=\left\{0\right\}\\ 4)\text{ }{V}_{j}={V}_{j+1}\oplus {W}_{j+1}\text{ }\forall j>0\\ 5)\text{ }f\left(t\right)\in {V}_{j+1}\iff f\left(2t\right)\in {V}_{j}\end{array}$ (26)

The symbol ⊕ should be interpreted as the orthogonal sum of two subspaces.

From Figure 2, one can observe that W_{j+1} is the orthogonal complement of V_{j+1} in V_{j}. W_{j} is the subspace of a band limited to the $\left[{2}^{-j},{2}^{-j+1}\right]$ interval, and the orthonormal basis for this subspace is the function family {ψ_{m,n}(t)}.

Theorem 2: for any succession of spaces satisfying the five conditions in Definition (26), there is an orthonormal basis for L^{2}(R) such that:

${\psi}_{m,n}\left(t\right)={2}^{-m/2}\psi \left({2}^{-m}t-n\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}m,n\in Z$ (27)

Figure 2. The spectrum is symmetrical in the vicinity of zero. The division into V_{i} spaces (V_{i} ⊂ V_{i−1}) results in W_{i} spaces. (The width of V_{j} and W_{j} is 2^{j/2}; they have unitary norm.)

Beginning with {ψ_{m,n}}, n ∈ Z, an orthonormal basis for W_{m} is obtained, where W_{m} is the orthogonal complement of V_{m} in V_{m−1}. A demonstration can be seen in [1].

By virtue of the previous theorem, the simple choice of a_{0} = 2 and b_{0} = 1 generates an orthonormal basis of functions.

Observation: From the multiresolution analysis, we have, therefore, two spaces and for each of them, a set of generating functions.

As V_{1} Í V_{0} and W_{1} Í V_{0}, the functions of these subspaces are boundaries (in L_{2}) FOR linear combinations of the base function V_{0}. There is a sequence {v(k)}, such that:

$\varphi \left(t\right)={\displaystyle \underset{k}{\sum}v\left(k\right)\varphi \left(2t-k\right)}\text{ }\left(\text{two-scale relation}\right)$ (28)

$\varphi \left(t\right)={2}^{1/2}{\displaystyle \underset{k}{\sum}v\left(k\right)1/2\mathrm{sin}c\left(2\text{\pi}\left(t-k/2\right)/2\right)}$ (29)

Reorganizing internal parentheses:

$\varphi \left(t\right)={2}^{1/2}{\displaystyle \underset{k}{\sum}v\left(k\right)1/2\mathrm{sin}c\left[2\text{\pi}\left(1/2\right)\left(t-k/2\right)\right]}$ (30)

In accordance with the sampling theorem results:

$v\left(k\right)={2}^{1/2}\varphi \left(k/2\right)=\mathrm{sin}\left(\text{\pi}k/4\right)/\text{\pi}k$ (31)

And since φ(t) satisfies the two-scale equation, it is called the scale function.

In the same way, {w(k)} is a sequence such that:

$\psi \left(t\right)={\displaystyle \underset{k}{\sum}w\left(k\right)\phi \left(2t-k\right){2}^{1/2}}$ (32)

Resulting in:

$w\left(k\right)={2}^{1/2}\psi \left(k/2\right)={2}^{1/2}\left(2\phi \left(k\right)-\phi \left(k/2\right)\right)$ (33)

The following relationships result from orthogonality:

$\langle \phi \left(t\right),\phi \left(t-m\right)\rangle =\delta \left(m\right)$ (34)

$\langle \psi \left(t\right),\psi \left(t-m\right)\rangle =\delta \left(m\right)$ (35)

$\langle \phi \left(t\right),\psi \left(t-m\right)\rangle =0$ (36)

where δ(m) is a generalized function or Dirac delta [1]. The inner product between functions is symbolized by ⟨·,·⟩.

The V spaces are generated by the scale functions φ(t), and similarly, the W spaces are generated by the wavelet functions ψ(t).

In other words, wavelet functions and scale functions are used as blocks on which to construct or decompose the signal at different levels of resolution. Wavelet functions will generate different versions of details of the composite signal and the scale function will generate the approximate version of the signal, object of the decomposition. This can be mathematically represented by the following equation:

$f\left(t\right)={\displaystyle \underset{k}{\sum}c\left(k\right)\phi \left(t-k\right)}+{\displaystyle \underset{k}{\sum}{\displaystyle \underset{j=0}{\overset{J-1}{\sum}}{d}_{j}\left(k\right){2}^{j/2}\psi \left({2}^{j}t-k\right)}}$ (37)

where c is the scale coefficient, d_{j} is the wavelet coefficient at scale j, φ(t) and ψ(t) are the scale and wavelet functions, respectively, and k is the translation coefficient.
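Equation (37) can be illustrated with a hand-rolled multiresolution decomposition using the Haar wavelet, the simplest orthonormal choice; the function names and test signal below are our own illustrative assumptions:

```python
import numpy as np

def haar_dwt(signal, levels):
    """Haar multiresolution analysis: at each level, split into the scale
    (approximation) part and the wavelet (detail) part of Equation (37).
    The signal length must be divisible by 2**levels."""
    a = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        approx = (a[0::2] + a[1::2]) / np.sqrt(2)   # scale coefficients c
        detail = (a[0::2] - a[1::2]) / np.sqrt(2)   # wavelet coefficients d_j
        details.append(detail)
        a = approx
    return a, details

def haar_idwt(approx, details):
    """Inverse of haar_dwt: perfect reconstruction of the signal."""
    a = approx
    for d in reversed(details):
        up = np.empty(2 * len(a))
        up[0::2] = (a + d) / np.sqrt(2)
        up[1::2] = (a - d) / np.sqrt(2)
        a = up
    return a

# A 60 Hz tone with a short added transient: the transient shows up in the
# detail (wavelet) coefficients, the tone mostly in the approximation.
t = np.arange(1024) / 1024
x = np.sin(2 * np.pi * 60 * t)
x[500:504] += 2.0                        # simulated disturbance
approx, details = haar_dwt(x, 4)
x_rec = haar_idwt(approx, details)       # reconstructs x exactly
```

The exact round trip illustrates the reflexivity of the wavelet expansion, while the detail coefficients at each level are the d_{j}(k) terms of Equation (37).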

Partial conclusion: this main result, proposed by the French mathematician Yves F. Meyer, is the core of the later power quality assessment (Section 7). Equation (37) has all three central elements proposed in this article: reflexivity, regularity, and series construction.

6.2. Multiscale Entropy (MSE)

With the ApE_{n} [14] and SampE_{n} algorithms, the loss of regularity in a time series is measured. Costa et al. [13] proposed taking into account a reconstitution of the time series on several scales, with which they managed to increase the classification level of the pathologies they study. This decomposition of the series is known as the Multiscale Entropy (MSE) algorithm. The decomposition process is shown in Figure 3 and is described below.

Description:

From the sequence $\left\{{x}_{1},\cdots ,{x}_{i},\cdots ,{x}_{N}\right\}$ , a new series y^{(τ)} emerges whose terms are averages of consecutive elements of the original series, without overlapping; τ corresponds to a scale factor. Each element of the new time series is calculated by Equation (38):

${y}_{j}^{\tau}=\frac{1}{\tau}{\displaystyle \underset{i=\left(j-1\right)\tau +1}{\overset{j\tau}{\sum}}{x}_{i}},\text{}1\le j\le \frac{N}{\tau}$ (38)

For scale one (τ = 1), time series y^{(}^{1)} is simply the original time series. The length of each new time series generated is equal to the length of the original series, divided by factor τ.

Finally, each new time series represents a new τ (factor scale function), which is processed by the SampE_{n}, thus obtaining the entropy of the signal at multiple scales, or MSE.
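The coarse-graining of Equation (38) amounts to a reshape and a mean; the sketch below (function name ours) builds the series y^{(τ)}, which is then passed to the SampE_{n} algorithm to obtain the MSE curve:

```python
import numpy as np

def coarse_grain(x, tau):
    """Equation (38): y_j is the mean of x over the j-th non-overlapping
    window of tau consecutive samples, j = 1 .. N // tau."""
    x = np.asarray(x, dtype=float)
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

# tau = 1 returns the original series; larger tau shortens it by that factor.
y1 = coarse_grain(np.arange(10), 1)   # the original series
y2 = coarse_grain(np.arange(10), 2)   # → [0.5, 2.5, 4.5, 6.5, 8.5]
```

Trailing samples that do not fill a complete window of length τ are discarded, matching the limit 1 ≤ j ≤ N/τ in (38).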

7. Applications for Power Quality

The three characteristics cited, reflexivity, regularity, and series, are indispensable properties for the design of a quality indicator.

Figure 3. Construction of the coarse-grained time series for entropy. Adapted from reference [13].

Quality itself can be understood as the valuation given to a physical object produced by some process; this valuation represents its degree of excellence. To assign it, an instrument is needed which measures a signal from the physical object to be evaluated. This signal must be placed in a comparative framework in which the degree of deviation from a reference is measurable. This deviation expresses its degree of quality.

7.1. Definition 3―Measurement of Excellence

It is possible to normalize the workspace with a scalar value representing the degree of excellence of a measured point versus its reference value. It is a normalized value and, in signal analysis terms, is equivalent to a signal-to-noise ratio.

7.2. Definition 4―The Quality Index

$QI=\sqrt{\frac{{\displaystyle \sum {x}_{i}^{2}}}{{x}_{r}^{2}}}$ (39)

In Equation (39), x_{i} indicates the components that deviate from reference x_{r}.

In this way, the definition of the quality index is reached. Either a continuous-parameter or a discrete-parameter space {L^{n}, l^{n}} can be used. Since the work can be carried to the transformed frame, the measure and integral will be defined according to Lebesgue [2] [15], and the analysis can be performed in discrete space.

7.3. Representation of the Quality Index from the Wavelet Transform

$QI=\sqrt{\frac{{\displaystyle \underset{k}{\sum}{\displaystyle \underset{j=0}{\overset{J-1}{\sum}}{\left({d}_{j}\left(k\right){2}^{j/2}\psi \left({2}^{j}t-k\right)\right)}^{2}}}}{{\displaystyle \underset{k}{\sum}{\left(c\left(k\right)\phi \left(t-k\right)\right)}^{2}}}}$ (40)

With this definition, the level of deviation of the detail energy values with respect to the energy of the coarse, low-frequency part of the signal is measured.
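Because the basis is orthonormal, Parseval's identity lets Equation (40) be evaluated directly on the coefficients: detail energy over approximation energy. The sketch below uses a one-level Haar split purely as an illustration; the signal and names are our assumptions:

```python
import numpy as np

def quality_index(approx, details):
    """Equation (40) evaluated on coefficients: by Parseval's identity for an
    orthonormal basis, term energy equals coefficient energy, so
    QI = sqrt(detail energy / approximation energy)."""
    detail_energy = sum(np.sum(np.asarray(d) ** 2) for d in details)
    approx_energy = np.sum(np.asarray(approx) ** 2)
    return float(np.sqrt(detail_energy / approx_energy))

# One-level Haar split of a clean wave and the same wave with a transient.
x = np.sin(2 * np.pi * np.arange(256) / 256)
xd = x.copy()
xd[101:104] += 1.5                             # simulated disturbance
results = {}
for label, sig in [("clean", x), ("disturbed", xd)]:
    a = (sig[0::2] + sig[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (sig[0::2] - sig[1::2]) / np.sqrt(2)   # detail coefficients
    results[label] = quality_index(a, [d])
# results["disturbed"] is much larger than results["clean"]
```

The disturbance injects energy into the detail band, so the index grows, which is exactly the deviation that (40) is meant to capture.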

7.4. Representation of the Quality Index from the Theory of Entropy

$QI=\sqrt{\frac{{\displaystyle \underset{k}{\sum}{\displaystyle \underset{j=1}{\overset{J}{\sum}}{\left(MS{C}_{j}\right)}^{2}}}}{{\left(MSC\right)}_{k}^{2}}}$ (41)

where MSC_{j} is the multiscale entropy at scale j, and k is the scale where the energy of major importance is concentrated relative to the rest. This gives the level of quality deviation.

8. Framework for Evaluation―Modified pqAT Technique (M_{pq}AT)

Much work has been done on the classification and characterization of disturbances [16] - [26] .

Here a modification to the power quality Analysis Technique-pqAT [27] is made. This is an algorithm whose objective is the characterization and classification of disturbances in the electrical system. This new technique, M_{pq}AT, is a previous step to quality maps [28] .

The signal analysis method starts from the definition of the instantaneous power tensor, and the transformation is then performed on the frame of the transformed wavelet [1] . There, the parameters of active, reactive, and disturbance power are determined.

Figure 4 shows the structure of the technique for power measurement and the classification of events in power systems.

Figure 4. Structure of the M_{pq}AT technique (block diagram).

In Figure 4, there are seven blocks, divided into three structures. The structure drawn with thick, continuous lines characterizes transient phenomena in the transformed plane. Next, a dotted structure is shown, which is basically an inference engine; this block applies rules for identifying the type of phenomenon that has been registered. Finally, two blocks drawn with thin, continuous lines perform the calculation of an indicator of quality deviation, or error. A description of each of the blocks is given below.

1) In Block 1, voltage and current are measured by an instrument or by a SCADA system (as is done in some systems at present) [24] [25] [26] [27] for the monitoring of various operating points in the system. The information obtained proceeds to Block 2, where the voltage and current are transformed in the frame of the power tensor, and the multiresolution or multiscale algorithm is then applied to each signal, depending on the case.

2) In Block 3, the quality index is calculated.

3) Block 4 represents the database, with which planning and operation of the monitored system can be achieved. It feeds the inference engine and defines the premises upon which the load identification and classification of transient events are made. This block, called the system database, also feeds the calibration block.

4) Block 5: This block examines events and has to do with a decrease in voltage value. The main events characterized here are:

・ Line energization.

・ Motor starting.

・ Capacitor bank start-up.

・ In the block, there may also be a classification of voltage dips due to faults, covering all aspects of fault characterization (possibly followed by the operation of the protection system).

・ These premises, accompanied by the inference engine, produce results presented by the classification block.

5) Block 6: In the calibration block, two parameters are set on which the entire analysis depends, making the method entirely dependent on them. These parameters are the sampling frequency and the number of decomposition levels (of the wavelet decomposition, or of the signal scales in the case of entropy). Because the decomposition is dyadic (division of the frequency axis into octaves), the signal length is also limited to a power-of-two number of samples, which agrees with most instruments that use the FFT.

The most current literature shows significant advances in the treatment of information from the point of view of classification techniques using the wavelet transform. From pioneering studies [27] to the results presented in [16] [17] [18] [19], two trends are observed: the first is analysis, and the second is classification. With the M_{pq}AT technique, an attempt is made to unify these two criteria and to give unity to the way of determining three typical effects of electromagnetic phenomena in three-phase systems: imbalance, harmonics and transients.

9. Conclusions

Two techniques traditionally used in signal analysis, entropy and wavelets, have been compared. The comparison focused on three criteria: reflexivity, regularity, and series construction.

The article showed that wavelet theory fulfills these three criteria perfectly. In the case of entropy, the concept of a series is introduced in an "artificial" signal shape, but reflexivity is not fulfilled. Consequently, entropy is a valuable tool for regularity measurement only.

Additionally:

・ Quality has been defined from two main points of view: as a series of attributes of a physical object, and as a degree of excellence that must be qualified according to that set of attributes.

・ The first part of the definition involves decomposing the attribute into a measurable series using the property of regularity. The second proposes quantifying the degree of excellence through the definition of quality indexes, based on conservative-type parameters determined through energy definitions in the transformed frame (Parseval's theorem).

・ Finally, any technique that exhibits decomposition with reflexivity, regularity, and series is a candidate for use as a quality evaluation framework.

This article closes with a proposal to evaluate power quality using the structure of an expert system dedicated to the measurement and classification of perturbations, a system called M_{pq}AT. The novelty of this technique is that, through one and the same structure, the analysis of both transient and stationary perturbations in any frame of reference is unified. The system topology has been considered in the most general way possible, based on the results obtained from the criteria of series, regularity, and reflexivity.

Acknowledgements

The authors would like to thank the Power and Distribution Networks Research Group (GREDyP) at the Universidad Nacional de Colombia, Manizales Branch.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Cano-Plata, E.A., Ustariz-Farfán, A.J. and Diaz-Cadavid, L.F. (2012) Power Tensor Theory and Continuous Wavelet Transform. American Journal of Computational Mathematics, 2, 130-135. https://doi.org/10.4236/ajcm.2012.22018

[2] Pinsky, M.A. (2010) Introduction to Fourier Analysis and Wavelets. Thomson, Mexico City.

[3] Dautray, R. and Lions, J.-L. (1988) Mathematical Analysis and Numerical Methods for Science and Technology. Vol. 2: Functional and Variational Methods. Springer-Verlag, Berlin. https://doi.org/10.1007/978-3-642-61566-5

[4] Bertoglio, O.A.J. and Johansen, O. (1982) Introducción a la teoría general de sistemas. [Introduction to the General Theory of Systems.] Editorial Limusa S.A. de C.V., México.

[5] Gray, R.M. (2011) Entropy and Information Theory. Springer, New York. https://doi.org/10.1007/978-1-4419-7970-4

[6] Gros, C. (2011) Complex and Adaptive Dynamical Systems: A Primer. Springer, Berlin. https://doi.org/10.1007/978-3-642-04706-0

[7] Cover, T.M. and Thomas, J.A. (2006) Elements of Information Theory. Wiley, New York.

[8] Pincus, S.M. (1991) Approximate Entropy: A Complexity Measure for Biological Time Series Data. Proceedings of the 1991 IEEE Seventeenth Annual Northeast Bioengineering Conference, Hartford, 4-5 April 1991, 35-36. https://doi.org/10.1109/NEBC.1991.154568

[9] Pincus, S.M. and Goldberger, A.L. (1994) Physiological Time-Series Analysis: What Does Regularity Quantify? American Journal of Physiology: Heart and Circulatory Physiology, 266, H1643-H1656.

[10] Grassberger, P. and Procaccia, I. (1983) Measuring the Strangeness of Strange Attractors. Physica D: Nonlinear Phenomena, 9, 189-208. https://doi.org/10.1016/0167-2789(83)90298-1

[11] Mendoza, J., Morles, E.C. and Chacón, E. (2011) La entropía aproximada como una nueva metodología para la detección de eventos dentro de un sistema dinámico híbrido. [Approximate Entropy as a New Methodology for Event Detection within a Hybrid Dynamic System.] Ciencia e Ingeniería, 31-42.

[12] Richman, J.S. and Moorman, J.R. (2000) Physiological Time-Series Analysis Using Approximate Entropy and Sample Entropy. American Journal of Physiology: Heart and Circulatory Physiology, 278, H2039-H2049.

[13] Costa, M., Goldberger, A. and Peng, C.-K. (2002) Multiscale Entropy Analysis of Complex Physiologic Time Series. Physical Review Letters, 89, Article ID: 068102. https://doi.org/10.1103/PhysRevLett.89.068102

[14] Pincus, S. (1995) Approximate Entropy (ApEn) as a Complexity Measure. Chaos: An Interdisciplinary Journal of Nonlinear Science, 5, 110-117.

[15] Rudin, W. (1973) Functional Analysis. 2nd Edition, McGraw-Hill, New York.

[16] Ferrero, A. (1998) Definitions of Electrical Quantities Commonly Used in Non-Sinusoidal Conditions. International Transactions on Electrical Energy Systems, 8, 235-240.

[17] Tugulea, A. (1996) Criteria for the Definition of the Electric Power Quality and Its Measurement Systems. International Transactions on Electrical Energy Systems, 6, 357-363. https://doi.org/10.1002/etep.4450060518

[18] Khalil, H.K. (1996) Nonlinear Systems. 2nd Edition, Prentice-Hall, New York.

[19] D’Attellis, C.E. (1988) Teoría distribucional de sistemas. [Distributional Theory of Systems.] Cursos y Seminarios de Matemáticas, Fascículo 34, Universidad de Buenos Aires, Buenos Aires.

[20] Styvaktakis, E. (2000) On Feature Extraction of Voltage Disturbance Signals. Technical Report No. 340L, Department of Signals and Systems, School of Electrical and Computer Engineering, Chalmers University of Technology, Sweden.

[21] Santoso, S., Powers, E., Grady, W.M. and Parsons, A.C. (2000) Power Quality Disturbance Waveform Recognition Using Wavelet-Based Neural Classifier, Part 1: Theoretical Foundation. IEEE Transactions on Power Delivery, 15, 222-228. https://doi.org/10.1109/61.847255

[22] Santoso, S., Powers, E., Grady, W.M. and Parsons, A.C. (2000) Power Quality Disturbance Waveform Recognition Using Wavelet-Based Neural Classifier, Part 2: Application. IEEE Transactions on Power Delivery, 15, 229-234. https://doi.org/10.1109/61.847256

[23] Huang, S. and Hsieh, C. (1999) High-Impedance Fault Detection Utilizing a Morlet Wavelet Transform Approach. IEEE Transactions on Power Delivery, 14, 1401-1410. https://doi.org/10.1109/61.796234

[24] Heydt, G.T., Fjeld, P.S., Liu, C.C., Pierce, D., Tu, L. and Hensley, G. (1999) Applications of the Windowed FFT to Electric Power Quality Assessment. IEEE Transactions on Power Delivery, 14, 1411-1416. https://doi.org/10.1109/61.796235

[25] Estrada, J., Cano-Plata, E., Younes-Velosa, C. and Cortés, C. (2011) Entropy and Coefficient of Variation (CV) as Tools for Assessing Power Quality. Ingeniería e Investigación, 31.

[26] Cano Plata, E.A. and Tacca, H.E. (2005) Electric Power Definition in the Wavelet Domain. International Journal of Wavelets, Multiresolution and Information Processing, 3, 573-585. https://doi.org/10.1142/S0219691305001032

[27] Cano Plata, E.A. and Tacca, H.E. (2005) Power Load Identification. Journal of the Franklin Institute, 342, 99-113. https://doi.org/10.1016/j.jfranklin.2004.08.006

[28] Ustariz-Farfán, A.J., Cano-Plata, E.A., Tacca, H. and Arango-Lemoine, C. (2012) Visualizing Two- and Three-Dimensional Maps for Power Quality Loss Assessment. Proceedings of the 2012 IEEE 15th International Conference on Harmonics and Quality of Power (ICHQP), Hong Kong, 17-20 June 2012, 909-914.

