Power Quality Data Compression Based on Iterative PCA Algorithm in Smart Distribution Systems

Ming Zhang^{1*}, Yiming Zhan^{1}, Shunfan He^{2}

^{1}School of Electronic and Electrical Engineering, Wuhan Textile University, Wuhan, China.

^{2}College of Computer Science, South Central University for Nationalities, Wuhan, China.

**DOI:** 10.4236/sgre.2017.812024

To reduce the stress of data transmission and storage for power quality (PQ) in smart distribution systems and to aid PQ analysis, a multichannel data compression method based on an iterative PCA (principal component analysis) algorithm is introduced. The proposed method uses PCA to reduce data redundancy and thereby compress the data. To improve computation speed, an iterative method is proposed to compute the principal components of the covariance matrix. The correctness and feasibility of the proposed method are verified by field PQ data tests. Compared with the discrete wavelet transform (DWT) method, the proposed method performs well in both compression ratio and reconstruction accuracy.

Keywords

Smart Distribution Systems, Power Quality, Data Compression, Principal Component Analysis (PCA)

Share and Cite:

Zhang, M., Zhan, Y. and He, S. (2017) Power Quality Data Compression Based on Iterative PCA Algorithm in Smart Distribution Systems. *Smart Grid and Renewable Energy*, **8**, 366-378. doi: 10.4236/sgre.2017.812024.

1. Introduction

With the growing demand for smart distribution systems, more and more power quality (PQ) monitors are needed, especially for power systems with distributed power sources and impulsive, sensitive loads [1] [2] [3] . In power systems, short-circuit faults, capacitor switching for power factor compensation, power electronic devices for special loads, etc. bring various PQ disturbances (PQD). With the deployment of a large number of PQ monitors, a large volume of data is produced in smart distribution systems, and it becomes essential to compress this volume so that the data sets can be transmitted and stored promptly and efficiently. Hence, PQ data compression attracts more concern than ever before [4] .

The main goal of any compression method is to achieve maximum data reduction while preserving morphological features upon reconstruction. Data compression methods are categorized into lossless and lossy techniques. Lossless methods obtain an exact reconstruction of the original signal, but a high compression ratio (CR) cannot be achieved. In contrast, lossy methods do not yield an exact reconstruction, but a higher CR can be achieved. Consequently, the commonly used PQ data compression methods are lossy in nature [5] [6] .

In general, a data compression scheme for PQ data consists of three steps: signal transform, quantization and encoding. Compared with quantization and encoding, researchers are more concerned with the signal transform. The work in [7] pioneered PQD data compression by the wavelet transform (WT), setting a threshold to eliminate small wavelet coefficients. Improved wavelet-based methods with different threshold settings have also been used in this field [8] [9] [10] . A currently popular approach to PQD data compression is the quantization method [11] [12] . These methods separate the fundamental component and disturbance components in power signals. The fundamental and harmonic components are compressed in the form of amplitude, frequency and phase angle by the Fourier transform (FT), while other PQD components, if any, are compressed by WT. The key to the quantization methods is how to distinguish the PQD components and separate them with little distortion to the fundamental and harmonic components. Since the periodic PQ signal is compressed by FT, and the signal compressed by WT contains far fewer low-frequency components, the quantization methods achieve much higher CRs than traditional methods. Another interesting PQ data compression method is based on singular value decomposition (SVD) [13] : the PQ data are transformed by SVD into a matrix containing only the nonzero singular values so that the data can be compressed. That method also explores different trade-offs between CR and loss of information. However, the SVD algorithm is easily affected by outliers and noise in the data.

Most of the works found in the literature address PQ data compression in stand-alone power systems. However, data compression in smart distribution systems deserves more consideration for distributed control applications, and research on it is still lacking. This paper presents a method based on an iterative principal component analysis (IPCA) algorithm for PQ data compression in smart distribution systems, which can compress multichannel data simultaneously. The measured data can be conveniently stored in a matrix format, which suits the application of the principal component analysis (PCA) algorithm. PCA is especially useful for complex data analysis, such as face recognition and data compression [14] [15] [16] . By employing PCA, a good trade-off between CR and loss of information can be achieved. Because the size of the data matrix is generally large, the traditional PCA is very time-consuming. To improve computation speed, an iterative method is proposed to compute the principal components of the covariance matrix.

The remainder of this paper is organized as follows. The PCA algorithm is introduced in Section 2; Section 3 presents the proposed method; Section 4 provides field PQ data tests and comparisons with related work; Section 5 summarizes the whole work.

2. PCA Algorithm

The PCA algorithm was invented in 1901 by Karl Pearson. It uses an orthogonal transformation to convert a set of measurements of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. The main advantage of PCA is that it can reduce the dimensionality and redundancy of data. The eigenvalues and eigenvectors are obtained by decomposing the covariance matrix of the data. The eigenvectors corresponding to the several largest eigenvalues are taken as the principal components, and the measured data are projected onto these principal components to represent the original data, thereby reducing the dimensionality and redundancy of the data.

Suppose that a sample set X contains m samples, and the dimension of each sample is n: $X=\left\{{x}_{1},{x}_{2},\cdots ,{x}_{m}\right\}$ , ${x}_{i}=\left({x}_{i1},{x}_{i2},\cdots ,{x}_{in}\right)\in {R}^{n}$ .

Representing each sample as a row, the $m\times n$ sample matrix S is the stack of all such rows, with $S\in {R}^{m\times n}$. The samples are then zero-meaned (centered) so that the average value of each dimension of the matrix is zero:

$\bar{x}=\frac{1}{m}\sum_{i=1}^{m}{x}_{i}$ (1)

${\tilde{x}}_{i}={x}_{i}-\bar{x}$ (2)

The sample matrix composed of the ${\tilde{x}}_{i}$ is denoted as $\tilde{S}$, where $\tilde{S}\in {R}^{m\times n}$; the covariance matrix C of $\tilde{S}$ is then obtained as follows:

$C=\frac{1}{m}{\tilde{S}}^{\text{T}}\tilde{S}=\frac{1}{m}\sum_{i=1}^{m}{\tilde{x}}_{i}^{\text{T}}{\tilde{x}}_{i}$ (3)

where $C\in {R}^{n\times n}$ is a real symmetric matrix, and ${\tilde{x}}_{i}^{\text{T}}$ and ${\tilde{S}}^{\text{T}}$ are the transposes of ${\tilde{x}}_{i}$ and $\tilde{S}$, respectively. According to matrix theory, a real symmetric matrix can be diagonalized, so there is an orthogonal matrix P satisfying ${P}^{\text{T}}CP=\Lambda$. The matrix P is obtained as follows. First, C is decomposed to get the diagonal matrix $\Lambda$ and the orthogonal matrix P, where $P,\Lambda \in {R}^{n\times n}$. Then a new diagonal matrix ${\Lambda}_{1}$ is composed of the first k (k < n) largest eigenvalues of $\Lambda$, and a new orthogonal matrix P_{1} is composed of the k eigenvectors corresponding to those eigenvalues. These k eigenvectors are the principal components obtained by the PCA.

If ${P}_{1}^{\text{T}}C{P}_{1}={\Lambda}_{1}$, then

${P}^{\text{T}}\left(\frac{{\tilde{S}}^{\text{T}}\tilde{S}}{m}\right)P=\frac{{\left(\tilde{S}P\right)}^{\text{T}}\tilde{S}P}{m}=\Lambda$ (4)

$\frac{{\left(\tilde{S}{P}_{1}\right)}^{\text{T}}\tilde{S}{P}_{1}}{m}={\Lambda}_{1}$ (5)

Let ${S}_{1}=\tilde{S}{P}_{1}$; then (5) becomes

$\frac{1}{m}{S}_{1}^{\text{T}}{S}_{1}={\Lambda}_{1}$ (6)

$\hat{S}={S}_{1}{P}_{1}^{\text{T}}$ (7)

As seen from (6), the covariance between different dimensions of the matrix S_{1} is zero. Each row of the matrix $\tilde{S}$ is a sample, each column of the matrix P_{1} is an eigenvector, and the k eigenvectors of P_{1} are mutually orthogonal. So $\tilde{S}{P}_{1}$ is equivalent to a linear transformation of each sample of $\tilde{S}$ onto the basis of column vectors of P_{1}. After the transform, the dimensions of S_{1} are mutually uncorrelated, and the dimension of each sample is k, where ${S}_{1}\in {R}^{m\times k}$. If k < n, dimension reduction is achieved while the internal structure of the original data is preserved as much as possible. Finally, the matrix $\tilde{S}$ can be approximately recovered by (7).
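The compression and reconstruction described by Equations (1)-(3) and (7) can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the eigendecomposition here uses `np.linalg.eigh` rather than the iterative solver introduced in Section 3.

```python
import numpy as np

def pca_compress(S, k):
    """Center S (m x n) and project onto the top-k principal components.

    Returns the compressed coefficients S1 (m x k), the basis P1 (n x k),
    and the mean row needed for reconstruction.
    """
    x_bar = S.mean(axis=0)                  # Eq. (1): per-dimension mean
    S_tilde = S - x_bar                     # Eq. (2): centering
    C = (S_tilde.T @ S_tilde) / S.shape[0]  # Eq. (3): covariance matrix
    eigvals, P = np.linalg.eigh(C)          # eigh: C is real symmetric
    order = np.argsort(eigvals)[::-1]       # sort eigenvalues descending
    P1 = P[:, order[:k]]                    # keep the top-k eigenvectors
    S1 = S_tilde @ P1                       # projection = compressed data
    return S1, P1, x_bar

def pca_reconstruct(S1, P1, x_bar):
    """Eq. (7): approximate recovery of the original samples."""
    return S1 @ P1.T + x_bar
```

With k = n the reconstruction is exact; for k < n the mean square error equals the sum of the discarded eigenvalues, as stated in Section 3.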

3. PQ Data Compression via IPCA

This section presents a methodology that allows the PQ data compression in smart distribution systems.

3.1. Data Matrix

PQ data from smart distribution systems need to be acquired and compressed, then transmitted through the communication network to the server of the control center for further analysis. The acquired data are put in the form of a matrix X, shown in Figure 1. This representation is convenient and can be used directly for data compression. Here each row of X is taken from a distributed measurement point at each time instant.

3.2. Data Compression

After centralizing the matrix X, the eigenvectors corresponding to the first k eigenvalues of the covariance matrix C are computed as the principal components by the PCA algorithm. The eigenvectors corresponding to the smaller n-k eigenvalues are discarded, and the remaining ones form the matrix P_{1}. Then the matrix $\tilde{S}$ is transformed into the matrix S_{1}, whose CR is n:k. The original data can be reconstructed using (7), and the mean square error of the reconstructed data equals the sum of the eliminated n-k eigenvalues. However, the main difficulty of PCA-based data compression is finding the eigenvalues and eigenvectors of the covariance matrix.

Figure 1. Data matrix X.

At present there are two kinds of conventional methods. The first uses $\left|\lambda I-A\right|=0$ to calculate the eigenvalues of the covariance matrix A, then uses $\left(\lambda I-A\right)x=0$ to calculate all the eigenvectors corresponding to those eigenvalues. Because the size of A is generally large, this method is computationally heavy and time-consuming, so it is not suitable for data compression. The second method uses a neural network (NN). It is simpler than the first and does not need to compute the covariance matrix explicitly. Taking 32 samples as an example, the construction of the single-layer NN is illustrated in Figure 2.

The weights of the network are iteratively adjusted and the iterative equation is as follows:

$y(k)=\sum_{i=1}^{32}{w}_{i}(k){x}_{i}(k)={w}^{\text{T}}(k)x(k)$ (8)

$w(k+1)=w(k)+\mu \left[y(k)x(k)-{y}^{2}(k)w(k)\right]$ (9)

According to [17], w finally converges to the eigenvector corresponding to the maximum eigenvalue, but its convergence speed is closely related to the learning factor $\mu$. Only when $\mu {\lambda}_{1}\le \frac{3\sqrt{2}-2}{2}\approx 0.8899$ (where ${\lambda}_{1}$ is the maximum eigenvalue of C) does (9) converge to the eigenvector corresponding to the maximum eigenvalue, and the convergence of (9) is fastest when $\mu {\lambda}_{1}=0.618$.

But ${\lambda}_{1}$ is unknown, so the learning factor $\mu$ may be poorly estimated. If the estimated $\mu$ is too small, learning is slow; if it is too large, (9) diverges. Therefore the NN method has limitations in real applications.
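The single-neuron update of Equations (8) and (9) (Oja's rule) can be sketched as follows; the sample-presentation order, the initialization, and the learning factor in the sketch are illustrative choices, not prescribed by the paper.

```python
import numpy as np

def oja_first_component(X, mu, n_iter):
    """Iterate Eqs. (8)-(9) over the rows of X, one sample per step.

    For a small enough learning factor mu, w tends toward the eigenvector
    of the sample covariance associated with the largest eigenvalue.
    """
    rng = np.random.default_rng(1)
    m, n = X.shape
    w = rng.normal(size=n)
    w /= np.linalg.norm(w)
    for k in range(n_iter):
        x = X[k % m]                        # cycle through the samples
        y = w @ x                           # Eq. (8): y(k) = w^T(k) x(k)
        w = w + mu * (y * x - y * y * w)    # Eq. (9): Oja's update
    return w / np.linalg.norm(w)
```

As the text notes, too large a $\mu$ makes (9) diverge, which is the practical weakness the iterative method of this paper avoids.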

To solve the above problems, this paper proposes a new method for finding the eigenvectors of the covariance matrix. First, we prove the following theorem.

Figure 2. Single-layer neural network.

Theorem 1. If

$w(k)=\frac{Aw(k-1)}{\Vert w(k-1)\Vert},\quad A\in {R}^{n\times n}$ (10)

where A is a symmetric positive semidefinite matrix, and $w\left(0\right)\in {R}^{n\times 1}$ is any vector not perpendicular to the eigenvector corresponding to the largest eigenvalue of A, then

$\underset{k\to +\infty}{\mathrm{lim}}\frac{w(k)}{\Vert w(k)\Vert}\in \left\{e\mid e\ \text{is an eigenvector corresponding to the largest eigenvalue of}\ A\right\}$ (11)

Proof.

$\because$ A is a symmetric positive semidefinite matrix, where $A\in {R}^{n\times n}$.

$\therefore$ $\exists P=\left({p}_{1},{p}_{2},\cdots ,{p}_{n}\right)$, ${p}_{i}\in {R}^{n\times 1}$, $\Vert {p}_{i}\Vert =1$, where ${p}_{i}$ is the eigenvector corresponding to the eigenvalue ${\lambda}_{i}$, and any two vectors in P are orthogonal. The matrix P satisfies:

${P}^{\text{T}}AP=\Lambda$ (12)

then $\forall w\left(0\right)\in {R}^{n\times 1}$ can be linearly expressed by ${p}_{i}$ as follows:

$w\left(0\right)={a}_{1}{p}_{1}+{a}_{2}{p}_{2}+\cdots +{a}_{i}{p}_{i}+\cdots +{a}_{n}{p}_{n}$ (13)

Since $w\left(0\right)$ is not perpendicular to the eigenvector corresponding to the largest eigenvalue of A, ${a}_{1}\ne 0$.

$w(k)=\frac{Aw(k-1)}{\Vert w(k-1)\Vert}\ \Rightarrow \ w(k)=\frac{{A}^{k}w(0)}{\Vert {A}^{k-1}w(0)\Vert}$ (14)

$w(k)={A}^{k}\left({a}_{1}{p}_{1}+\cdots +{a}_{n}{p}_{n}\right)\frac{1}{\Vert {A}^{k-1}w(0)\Vert}$ (15)

$w(k)={\lambda}_{1}^{k}\left({a}_{1}{p}_{1}+\cdots +{a}_{n}{\left(\frac{{\lambda}_{n}}{{\lambda}_{1}}\right)}^{k}{p}_{n}\right)\frac{1}{\Vert {A}^{k-1}w(0)\Vert}$ (16)

Because A is positive semidefinite, let ${\lambda}_{1}={\lambda}_{2}=\cdots ={\lambda}_{i}>{\lambda}_{i+1}\ge \cdots \ge {\lambda}_{n}\ge 0$; then $\underset{k\to +\infty}{\mathrm{lim}}{\left(\frac{{\lambda}_{m}}{{\lambda}_{1}}\right)}^{k}=0,\ m=i+1,\cdots ,n.$

Let $\beta ={a}_{1}{p}_{1}+\cdots +{a}_{i}\frac{{\lambda}_{i}^{k+1}}{{\lambda}_{1}^{k+1}}{p}_{i}+\cdots +{a}_{n}\frac{{\lambda}_{n}^{k+1}}{{\lambda}_{1}^{k+1}}{p}_{n}$; then $\underset{k\to +\infty}{\mathrm{lim}}\frac{w(k)}{\Vert w(k)\Vert}=\underset{k\to +\infty}{\mathrm{lim}}\frac{\beta}{\Vert \beta \Vert}$

$\because \underset{k\to +\infty}{\mathrm{lim}}\beta ={a}_{1}{p}_{1}+{a}_{2}{p}_{2}+\cdots +{a}_{i}{p}_{i},\ i<n$

$\therefore \underset{k\to +\infty}{\mathrm{lim}}\frac{w(k)}{\Vert w(k)\Vert}=\underset{k\to +\infty}{\mathrm{lim}}\frac{\beta}{\Vert \beta \Vert}=\frac{{a}_{1}{p}_{1}+{a}_{2}{p}_{2}+\cdots +{a}_{i}{p}_{i}}{\Vert {a}_{1}{p}_{1}+{a}_{2}{p}_{2}+\cdots +{a}_{i}{p}_{i}\Vert}$ (17)

where ${p}_{1},{p}_{2},\cdots ,{p}_{i}$ are the eigenvectors corresponding to the largest eigenvalue ${\lambda}_{1}$, so any nonzero linear combination of them is still an eigenvector corresponding to the largest eigenvalue ${\lambda}_{1}$ of the matrix A.

The covariance matrix C of PQ data meets the conditions of the theorem. The principal components of the covariance matrix are calculated as follows: first compute the eigenvector ${e}_{1}$ corresponding to the largest eigenvalue ${\lambda}_{1}$ using (10). To compute the next eigenvector ${e}_{2}$, note that $A=\sum_{i=1}^{n}{\lambda}_{i}{e}_{i}{e}_{i}^{\text{T}}$, and let $B=A-{\lambda}_{1}{e}_{1}{e}_{1}^{\text{T}}$ (with ${\lambda}_{1}={e}_{1}^{\text{T}}A{e}_{1}$), so that the eigenvalues of B sorted by size are ${\lambda}_{2}\ge \cdots \ge {\lambda}_{k}\ge \cdots \ge {\lambda}_{n}\ge {\lambda}_{n+1}=0$. Replacing A by B, the second principal component can be calculated in the same way, and the remaining principal components follow in turn.
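The procedure just described — power iteration via (10) followed by deflation $B=A-{\lambda}_{1}{e}_{1}{e}_{1}^{\text{T}}$ — can be sketched as follows. The random initialization and the convergence tolerance are implementation choices; a random start vector is almost surely not perpendicular to the leading eigenvector, as Theorem 1 requires.

```python
import numpy as np

def ipca_components(C, k, tol=1e-10, max_iter=10_000):
    """Top-k principal components of a symmetric PSD matrix C.

    Repeatedly applies w <- A w / ||w|| (Eq. (10)) until the normalized
    direction stops changing, then deflates A by lambda * e e^T to
    expose the next component.
    """
    A = C.copy()
    n = A.shape[0]
    eigvals, eigvecs = [], []
    rng = np.random.default_rng(0)
    for _ in range(k):
        w = rng.normal(size=n)            # random start vector
        w /= np.linalg.norm(w)
        for _ in range(max_iter):
            w_new = A @ w
            w_new /= np.linalg.norm(w_new)
            if np.linalg.norm(w_new - w) < tol:
                w = w_new
                break
            w = w_new
        lam = w @ A @ w                   # Rayleigh quotient: lambda = e^T A e
        eigvals.append(lam)
        eigvecs.append(w)
        A = A - lam * np.outer(w, w)      # deflation: B = A - lambda e e^T
    return np.array(eigvals), np.column_stack(eigvecs)
```

Because C is positive semidefinite, the deflated matrix stays positive semidefinite, so the iteration can be reused unchanged for every component.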

The following simplified steps are applied to PQ data compression based on the IPCA:

Step 1. Get PQ data.

Step 2. Calculate the mean of PQ data using (1).

Step 3. Subtract the mean from PQ data using (2).

Step 4. Construct the covariance matrix of the centered data using (3).

Step 5. Calculate eigenvectors and eigenvalues of the covariance matrix using the iterative method.

Step 6. Choose principal components and preserve the k (the desired number) principal components which correspond to the larger eigenvalues.

Step 7. Quantization and encoding: preserve the quantized principal components and their indices as the compressed coefficients.

The PQ data compression scheme based on the PCA algorithm is shown in Figure 3.
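Steps 1-7 can be strung together in a short end-to-end sketch. The uniform 16-bit quantizer below is only a placeholder for Step 7, since the paper does not specify its quantizer/encoder, and Step 5 is replaced by `np.linalg.eigh` for brevity.

```python
import numpy as np

def ipca_compress_pipeline(X, k, bits=16):
    """End-to-end sketch of Steps 1-7 on a data matrix X (m x n)."""
    x_bar = X.mean(axis=0)                      # Steps 2-3: mean and centering
    Xc = X - x_bar
    C = Xc.T @ Xc / X.shape[0]                  # Step 4: covariance matrix
    vals, vecs = np.linalg.eigh(C)              # Step 5 (simplified solver)
    P1 = vecs[:, np.argsort(vals)[::-1][:k]]    # Step 6: keep top-k components
    S1 = Xc @ P1
    scale = np.abs(S1).max() / (2 ** (bits - 1) - 1)
    q = np.round(S1 / scale).astype(np.int32)   # Step 7: uniform quantization
    return q, scale, P1, x_bar

def ipca_decompress(q, scale, P1, x_bar):
    """Dequantize and apply Eq. (7) to recover the data."""
    return (q * scale) @ P1.T + x_bar
```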

4. Data Test, Discussion and Comparison

Both IPCA and the discrete wavelet transform (DWT) are capable of multichannel PQ data compression. PQ data collected from various measurement points in the smart distribution system are used to test the methods. The PQ data are sampled at 12.8 kHz and quantized with 16 bits. There are 32 channels of PQ data, and each channel consists of 1536 samples. The tested PQ matrix is formed as ${R}^{m\times n}$, where m is the number of sample channels and n is the number of samples per channel. The CR of the proposed method is given by (18). By choosing different numbers of principal components, different CRs can be obtained.

Figure 3. PQ data compression scheme based on the PCA algorithm.

$CR=\frac{\text{Number of data without compression}}{\text{Number of data after transformation}}$ (18)

Mean absolute error (MAE) and mean percentage error (MPE), shown in (19) and (20) respectively, are used to evaluate the reconstruction accuracy of the PQ data.

$MAE=\frac{1}{n}{\displaystyle \underset{i=1}{\overset{n}{\sum}}\left|{x}_{i}-{\stackrel{^}{x}}_{i}\right|}$ (19)

$MPE=\frac{1}{n}{\displaystyle \underset{i=1}{\overset{n}{\sum}}\left|\frac{{x}_{i}-{\stackrel{^}{x}}_{i}}{{x}_{i}}\right|}\times 100$ (20)

where ${\stackrel{^}{x}}_{i}$ is the reconstructed data corresponding to the original data ${x}_{i}$ of each channel.
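For reference, (18)-(20) translate directly into code. These helpers assume, as (20) implicitly does, that no original sample ${x}_{i}$ is exactly zero.

```python
import numpy as np

def compression_ratio(n, k):
    """Eq. (18): an m x n matrix compressed to m x k coefficients
    gives CR = n / k."""
    return n / k

def mae(x, x_hat):
    """Eq. (19): mean absolute error of a reconstructed channel."""
    return float(np.mean(np.abs(x - x_hat)))

def mpe(x, x_hat):
    """Eq. (20): mean percentage error; assumes no sample of x is zero."""
    return float(np.mean(np.abs((x - x_hat) / x)) * 100)
```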

To compare the performance of data compression using the IPCA, the DWT is applied to the same dataset. Table 1 shows the results obtained with the IPCA and with the Daubechies 4 wavelet (db4) and four levels of decomposition [7] . Different thresholds were set so as to retain the number of wavelet coefficients that results in the same CRs as shown for the IPCA.

It can be seen from Table 1 that the IPCA achieves a better trade-off at higher CRs. The MAE and MPE of the data reconstructed by the IPCA are lower than those of the data reconstructed by the DWT. So the compression performance of the IPCA is better than that of the DWT.

Figure 4 shows the original PQ data of the first three channels. Figures 5-8 illustrate the decompressed PQ data of the first three channels, which use the extraction of 2, 4, 8 and 16 principal components by the IPCA.

It can be found from Figures 5-8 that at CRs of 2, 4 and 8 there is no visible difference between the reconstructed and the original PQ data; thus the CR can be as high as 8, and the more principal components are extracted, the better the reconstruction. But when the CR reaches 16, serious distortion of the reconstructed data may occur.

Table 1. Performances comparison of IPCA and DWT (db4).

Figure 4. The original PQ data of the first three channels.

Figure 5. The reconstructed PQ data of the first three channels (CR = 2).

Figure 9 shows a comparison of the number of iterations needed to calculate the principal components of the tested PQ matrix by the IPCA and the NN-PCA. The horizontal axis is the number of eigenvectors and the vertical axis is the number of iterations. It can be seen from Figure 9 that, for the same number of extracted principal components, the proposed method needs clearly fewer iterations than the NN-PCA, so the IPCA is more efficient than the NN-PCA.

Figure 6. The reconstructed PQ data of the first three channels (CR = 4).

Figure 7. The reconstructed PQ data of the first three channels (CR = 8).

Figure 8. The reconstructed PQ data of the first three channels (CR = 16).

Figure 9. Comparison of the number of iterations by the IPCA and NN-PCA.

5. Conclusions

In summary, the PCA algorithm is used to reduce the redundancy of data. Because PCA is the optimal transform in the minimum mean-square-error sense, the larger eigenvalues are retained and the smaller ones are omitted, as required, to reduce dimensionality, simplify the model, or compress the data. With these characteristics, PCA is well suited to data compression. This paper proposes a multichannel PQ data compression algorithm via IPCA: PQ data are preprocessed to form a matrix, and IPCA compresses the matrix to yield the compressed data. Field PQ data tests validate that the proposed method offers a high CR, accurate reconstruction, and low computational complexity, and the iterative method is particularly easy to program.

Because the test data are not particularly extensive, and the covariance matrices of different original data differ, the number of iterations required by the proposed method can vary considerably. The number of iterations also depends strongly on the construction of the initial vector. How to construct the initial vector and reduce the number of iterations for different PQ data needs further study.

Acknowledgements

This work is supported by National Natural Science Foundation of China (No. 51477124).

Conflicts of Interest

The authors declare no conflicts of interest.

[1] |
Heydt, G.T. (2010) The Next Generation of Power Distribution Systems. IEEE Transactions on Smart Grid, 1, 225-335. https://doi.org/10.1109/TSG.2010.2080328 |

[2] |
McBee, K.D. and Simoes, M.G. (2012) Utilizing a Smart Grid Monitoring System to Improve Voltage Quality of Customers. IEEE Transactions on Smart Grid, 3, 738-743. https://doi.org/10.1109/TSG.2012.2185857 |

[3] |
Li, S. and Wang, X. (2015) Cooperative Change Detection for Voltage Quality Monitoring in Smart Grids. IEEE Transactions on Information Forensics and Security, 11, 86-99. https://doi.org/10.1109/TIFS.2015.2477796 |

[4] |
Tcheou, M.P. and Lovisolo, L. (2014) The Compression of Electric Signal Waveforms for Smart Grid: State of the Art and Future Trend. IEEE Transactions on Smart Grid, 5, 291-304. https://doi.org/10.1109/TSG.2013.2293957 |

[5] |
Cormane, J. and de Assis, F.M. (2015) Spectral Shape Estimation in Data Compression for Smart Grid Monitoring. IEEE Transactions on Smart Grid, 7, 1214-1221. https://doi.org/10.1109/TSG.2015.2500359 |

[6] |
Unterweger, A. and Engel, D. (2015) Resumable Load Data Compression in Smart Grids. IEEE Transactions on Smart Grid, 6, 919-929.
https://doi.org/10.1109/TSG.2014.2364686 |

[7] |
Santoso, S., Powers, E.J. and Grady, W.M. (1997) Power Quality Disturbance Data Compression Using Wavelet Transform Methods. IEEE Transactions on Power Delivery, 12, 1250-1256. https://doi.org/10.1109/61.637001 |

[8] |
Meher, S.K., Pradhan, A.K. and Panda, G. (2004) An Integrated Data Compression Scheme for Power Quality Events Using Spline Wavelet and Neural Network. Electric Power System Research, 69, 213-220. https://doi.org/10.1016/j.epsr.2003.10.001 |

[9] |
Norman, C.F.T. and John, Y.C.C. (2012) Real-Time Power-Quality Monitoring with Hybrid Sinusoidal and Lifting Wavelet Compression Algorithm. IEEE Transactions on Power Delivery, 27, 1718-1726. https://doi.org/10.1109/TPWRD.2012.2201510 |

[10] |
Ning, J., Wang, J., Gao, W. and Liu, C. (2011) A Wavelet-Based Data Compression Technique for Smart Grid. IEEE Transactions on Smart Grid, 2, 212-218.
https://doi.org/10.1109/TSG.2010.2091291 |

[11] |
Zhang, M., Li, K.C. and Hu, Y.S. (2011) A High Efficient Compression Method for Power Quality Applications. IEEE Transactions on Instrumentation and Measurement, 60, 1976-1985. https://doi.org/10.1109/TIM.2011.2115590 |

[12] |
He, S.F., Zhang, M., Tian, W., Zhang, J. and Ding, F. (2015) A Parameterization Power Data Compress Using Strong Trace Filter and Dynamics. IEEE Transactions on Instrumentation and Measurement, 64, 2636-2645.
https://doi.org/10.1109/TIM.2015.2416451 |

[13] | Souza, J.C.S. de, Assis, T.M.L. and Pal, B.C. (2015) Data Compression in Smart Distribution Systems via Singular Value Decomposition. IEEE Transactions on Smart Grid, 6, 275-284. |

[14] |
Draper, B.A., Baek, K., Bartlett, M.S. and Beveridgea, J.R. (2003) Recognizing Faces with PCA and ICA. Computer Vision and Image Understanding, 91, 115-137.
https://doi.org/10.1016/S1077-3142(03)00077-8 |

[15] |
Ding, Q. and Kolaczyk, E.D. (2010) A Compressed PCA Subspace Method for Anomaly Detection in High-Dimensional Data. IEEE Transactions on Information Theory, 59, 7419-7419. https://doi.org/10.1109/TIT.2013.2278017 |

[16] |
Liu, Y. and Pados, D.A. (2016) Compressed-Sensed-Domain L1-PCA Video Surveillance. IEEE Transactions on Multimedia, 18, 351-363.
https://doi.org/10.1109/TMM.2016.2514848 |

[17] |
Yi, Z., Ye, M., Lv, J.C. and Tan, K.K. (2005) Convergence Analysis of a Deterministic Discrete Time System of Oja's PCA Learning Algorithm. IEEE Transactions on Neural Networks, 16, 1318-1328. https://doi.org/10.1109/TNN.2005.852236 |


Copyright © 2024 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.