Induction motors (IMs) are commonly used in various industrial applications. Reliable online IM health condition monitoring systems are critically needed in industry to improve the operational accuracy and safety of the IMs and the machinery. A new evolving algorithm is proposed to provide greater decision-making transparency, as well as better classification accuracy and processing efficiency. The effectiveness of the developed intelligent classifier is examined by simulation and experimental tests.

Induction motors (IMs) are widely used in industrial applications such as pumping stations, manufacturing facilities, and electric vehicles. IMs have a simple construction and high efficiency relative to other types of motors [

An IM health condition monitoring system consists of three general modules: 1) data acquisition, 2) fault feature extraction, and 3) automatic diagnostic classification. This work focuses on the automatic diagnostic classification.

Pattern classification is a means to classify features obtained by appropriate signal processing techniques into different IM health categories. To automatically perform pattern classification, multiple soft-computing-based methods have been proposed in the literature, such as support vector machines [

An adaptive neuro-fuzzy inference system (ANFIS) has a clear decision-making process owing to its linguistic fuzzy reasoning structure [

To solve the parameter and expertise-related problems of ANFIS-based classifiers, an evolving FIS system can be used for classification, where both system parameters and linguistic fuzzy reasoning structure are evolved iteratively. Such clustering can be achieved with evolutionary algorithms based on measures such as data potential (a measure of data density) [

An insufficient reasoning structure reduces the interpretability of the diagnostic results, which in turn decreases the clarity of the reasoning behind false or missed alarms.

To tackle the aforementioned problems, an evolving fuzzy (EF) classifier is developed in this work to integrate features from several fault detection techniques for a more reliable IM rotor bar fault diagnosis. It is new in the following aspects: 1) A new updated clustering algorithm is proposed for an evolving fuzzy system; 2) a new strategy is suggested to implement the proposed EF classifier for an IM rotor bar condition monitoring application.

The remainder of this paper is organized as follows. Section 2 discusses the development of the EF classifier. The effectiveness of the proposed technology is verified in Section 3. Some concluding remarks are summarized in Section 4.

A first-order Takagi-Sugeno-Kang (TSK-1) fuzzy inference architecture is selected as the reasoning platform in the proposed EF classifier, due to its effectiveness in data modeling [

The procedure for training an EF system is illustrated in

The training procedures can be described in the following steps:

1) Cluster inputs with an evolutionary algorithm.

2) Project clusters into fuzzy MFs.

3) Process inputs with MFs to determine each rule’s firing strength.

4) Normalize all firing strengths, as a measure of the rules’ contributions to the final output.

5) Formulate an input matrix by multiplying the normalized firing strengths with the inputs of the TSK-1 model.

6) Update the TSK-1 consequent parameters using available training data pairs.

Assume that an input to the fuzzy classifier has the form x(j, k), where j = 1, 2, ⋯, J and J is the number of dimensions (attributes) of the input, and k = 1, 2, ⋯, K indexes the normalized datapoints presented to the classifier system. Each output class (e.g., healthy motor, faulty motor) corresponds to one or more rules in the fuzzy system. As an illustration, consider a fuzzy classifier with only two dimensions, i.e., j = 1, 2. Then the ith rule, R_i, can be represented as [

R_i: IF [x(1, k) is MF_i(1)] AND [x(2, k) is MF_i(2)] THEN y(k) is O_i(k), (1)

where O_i(k) is the output of rule R_i, and MF_i(j) is a fuzzy MF representing the degree of belongingness of an input along the jth dimension. Details for obtaining the results of the fuzzy classifier are discussed in the following subsections.
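To make this rule structure concrete, the following minimal Python/NumPy sketch represents a single TSK-1 rule as a data structure. The class and method names are illustrative only (they are not from the paper), and the membership, firing-strength, and consequent evaluations anticipate Equations (7), (8), and (10) given later.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TSK1Rule:
    """One rule R_i of Equation (1): Gaussian antecedent MFs plus a linear consequent."""
    center: np.ndarray       # cluster center x(., c_m), shape (J,)
    spread: float            # cluster spread sigma_m, shared across dimensions
    consequents: np.ndarray  # [C_i0, C_i1, ..., C_iJ], shape (J + 1,)

    def membership(self, x):
        """Degree of belongingness along each dimension (cf. Equation (7))."""
        return np.exp(-((x - self.center) ** 2) / (2.0 * self.spread ** 2))

    def firing_strength(self, x):
        """Min t-norm over the per-dimension memberships (cf. Equation (8))."""
        return float(np.min(self.membership(x)))

    def consequent_output(self, x):
        """First-order (linear) TSK consequent (cf. Equation (10))."""
        return float(self.consequents[0] + self.consequents[1:] @ x)
```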

The clustering takes two steps: 1) evolve the cluster centers, and 2) compute the cluster spreads after evolution. Clustering in the input space can be achieved with an evolutionary algorithm based on data potential, a measure of data density. Based on the fundamental definition of potential in [

P(k) = \frac{k}{k\left[1 + \sum_{j=1}^{J}\left[x(j,k)\right]^{2}\right] - 2\sum_{j=1}^{J} x(j,k)\,B(k) + D(k)}, (2)

where,

B(k) = \sum_{a=1}^{k-1} x(j,a) + x(j,k), (3)

and,

D(k) = \sum_{a=1}^{k-1}\sum_{j=1}^{J}\left[x(j,a)\right]^{2} + \sum_{j=1}^{J}\left[x(j,k)\right]^{2}. (4)

B(k) and D(k) are accumulated variables relating the previous datapoints up to k−1 to the present datapoint at k.
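As an illustration of how Equations (2)-(4) can be evaluated recursively over a data stream, the brief sketch below (Python/NumPy, with illustrative names) accumulates B(k) and D(k) and returns the potential of every datapoint.

```python
import numpy as np

def data_potential(X):
    """Return P(k) of Equation (2) for each row of X (shape (K, J)),
    using the accumulated quantities B(k) and D(k) of Equations (3)-(4)."""
    K, J = X.shape
    P = np.zeros(K)
    B = np.zeros(J)   # sum of previous datapoints, per dimension
    D = 0.0           # sum of squared norms of previous datapoints
    for k in range(K):            # k is 0-based here; the equations use k + 1
        x = X[k]
        b_k = B + x               # Equation (3)
        d_k = D + np.sum(x ** 2)  # Equation (4)
        denom = (k + 1) * (1.0 + np.sum(x ** 2)) - 2.0 * np.sum(x * b_k) + d_k
        P[k] = (k + 1) / denom    # Equation (2)
        B, D = B + x, D + np.sum(x ** 2)
    return P
```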

In cluster center identification, from Equation (2), the initial (i.e., the first) cluster center is established at the first datapoint. With subsequent datapoints, the potential of the mth existing cluster center can be recursively updated by:

P_{m}(k) = \frac{k}{k\left[1 + \sum_{j=1}^{J}\left[x(j,c_{m})\right]^{2}\right] - 2\sum_{j=1}^{J} x(j,c_{m})\,B(k) + D(k)}, (5)

where x(j, c_m) is the datapoint corresponding to a cluster center.

A new cluster center is established when the data potential at datapoint k, P(k), is greater than or equal to the data potential of at least one existing cluster center, i.e., ∃ m ∈ {1, 2, ⋯}: P(k) ≥ P_m(k). Once the data potential and cluster centers of every datapoint have been computed, the spread is determined with an algorithm [

\sigma_{m} = \frac{\sum_{s}\sum_{j=1}^{J}\left[x(j,c_{m}) - x(j,s)\right]^{2}}{S_{d}}, (6)

where S_d is the data scatter, i.e., the number of datapoints s that have the shortest Euclidean distance to the cluster center x(j, c_m). From the known training output data, each cluster center is assigned to its respective class; for example, cluster "1" represents a healthy motor condition, cluster "2" represents a faulted motor condition, etc.
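A minimal sketch of the two clustering steps is given below (Python/NumPy, illustrative names). It evolves cluster centers with the loose criterion described above (Equations (2) and (5)) and then computes the spreads of Equation (6); the class assignment of each center from the known training outputs, and the post-processing described next, are omitted. A small floor on the spread is added here only to avoid division by zero for singleton clusters.

```python
import numpy as np

def evolve_clusters(X):
    """Evolve cluster centers (Equations (2) and (5)) and compute spreads (Equation (6))."""
    K, J = X.shape
    B, D = np.zeros(J), 0.0            # accumulators of Equations (3)-(4)

    def potential(point, k, b_k, d_k):
        # Common form of Equations (2) and (5) at (1-based) step k.
        denom = k * (1.0 + np.sum(point ** 2)) - 2.0 * np.sum(point * b_k) + d_k
        return k / denom

    center_idx = [0]                   # the first datapoint seeds the first cluster center
    B, D = B + X[0], D + np.sum(X[0] ** 2)
    for k in range(1, K):
        x = X[k]
        b_k, d_k = B + x, D + np.sum(x ** 2)
        p_new = potential(x, k + 1, b_k, d_k)                           # Equation (2)
        p_old = [potential(X[c], k + 1, b_k, d_k) for c in center_idx]  # Equation (5)
        if any(p_new >= pm for pm in p_old):   # loose criterion: at least one center exceeded
            center_idx.append(k)
        B, D = B + x, D + np.sum(x ** 2)

    centers = X[center_idx]
    # Equation (6): spread from the datapoints closest (in Euclidean distance) to each center.
    nearest = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2), axis=1)
    spreads = np.empty(len(centers))
    for m in range(len(centers)):
        members = X[nearest == m]
        S_d = max(len(members), 1)                                  # data scatter
        spreads[m] = max(np.sum((members - centers[m]) ** 2) / S_d, 1e-6)
    return centers, spreads
```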

Once the initial cluster centers and spreads have been computed, post-processing is undertaken to generate a single representative cluster per rule.

To perform fuzzy reasoning, the inputs are fuzzified with Gaussian MFs, expressed as:

MF_{i}(j) = \exp\left(-\frac{\left(x(j,k) - x(j,c_{m})\right)^{2}}{2\sigma_{m}^{2}}\right) \in (0, 1], (7)

where i = 1, 2, ⋯, I indexes the rule associated with a cluster. From Equation (7), the MFs are derived from the cluster centers and spreads. To implement the fuzzy reasoning structure of Equation (1), the firing strength of the ith rule, w_i(k), is computed as follows:

w_{i}(k) = \min\left\{ MF_{i}(1), MF_{i}(2), \cdots, MF_{i}(J) \right\}, (8)

where the min operator is a fuzzy t-norm operator (e.g., AND) [
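The fuzzification and rule firing of Equations (7)-(8) can be evaluated for a whole batch of inputs at once, as in the brief sketch below (Python/NumPy, illustrative names; it assumes the centers and spreads produced by the clustering sketch above).

```python
import numpy as np

def firing_strengths(X, centers, spreads):
    """Equations (7)-(8): Gaussian membership per dimension, then the min t-norm per rule."""
    # memberships[k, i, j] = MF_i(j) evaluated at the kth input (Equation (7))
    diff = X[:, None, :] - centers[None, :, :]
    memberships = np.exp(-(diff ** 2) / (2.0 * spreads[None, :, None] ** 2))
    return memberships.min(axis=2)     # w_i(k) of Equation (8), shape (K, I)
```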

The output of the classifier system, O(k), is computed as:

O(k) = \sum_{i=1}^{I}\left\{ \bar{w}_{i}(k)\, f_{i}[x(j,k)] \right\}; \quad \bar{w}_{i}(k) = w_{i}(k) \Big/ \sum_{i=1}^{I} w_{i}(k), (9)

where w̄_i(k) is the normalized firing strength, which represents the contribution of the ith rule's firing strength to the output, and f_i is the TSK-1 consequent function of the ith rule, represented as:

f_{i}[x(j,k)] = C_{i0} + \sum_{j=1}^{J} x(j,k)\,C_{ij}, (10)

where C_{i0}, C_{i1}, ⋯, C_{iJ} are the consequent parameters of the ith rule of the EF classifier. These unknown linear consequent parameters can be estimated by training. For example, if T(k) is the target of the kth training data pair, then:

T(k) = \sum_{i=1}^{I}\left\{ \bar{w}_{i}(k)\, f_{i}[x(j,k)] \right\}, (11)

which can be expanded to,

T(k) = \left[\, \bar{w}_{1}(k) \;\; \bar{w}_{1}(k)x(1,k) \;\; \cdots \;\; \bar{w}_{1}(k)x(J,k) \;\; \cdots \;\; \bar{w}_{I}(k) \;\; \bar{w}_{I}(k)x(1,k) \;\; \cdots \;\; \bar{w}_{I}(k)x(J,k) \,\right]\left[\, C_{10} \;\; C_{11} \;\; \cdots \;\; C_{1J} \;\; \cdots \;\; C_{I0} \;\; C_{I1} \;\; \cdots \;\; C_{IJ} \,\right]^{T}, (12)

Equation (12) can be represented in a matrix/vector form:

\vec{T} = Z\vec{C}, (13)

where \vec{T}, Z, and \vec{C} are the target vector, input matrix, and consequent parameter vector, respectively. Since Z is generally a non-square matrix whose inverse cannot be computed directly, the singular value decomposition (SVD) is used to solve for \vec{C}. The SVD factorizes the matrix Z into three components:

Z = UDV^{T}, (14)

where U and V are the matrices of left and right singular vectors, respectively, and D is the diagonal matrix of singular values of Z. From Equation (14), the Moore-Penrose pseudo-inverse [

Z^{+} = VD^{+}U^{T}, (15)

where D^{+} is obtained by taking the reciprocal of each non-zero element of D. With Equation (15), the consequent parameters \vec{C} can then be estimated by

Z^{+}\vec{T} = Z^{+}Z\vec{C} \cong \vec{C}. (16)

It is noted that the entire training process is one-pass, without the need for back-propagation of error. Once the consequent parameters of Equation (16) are solved, they are applied to Equations (9)-(10) for the testing inputs to determine the EF classifier's output.
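Putting the remaining pieces together, the sketch below (Python/NumPy, illustrative names) normalizes the firing strengths of Equation (9), forms the input matrix Z of Equation (12), estimates the consequent parameters through the SVD-based Moore-Penrose pseudo-inverse of Equations (14)-(16) (numpy.linalg.pinv computes this pseudo-inverse via the SVD), and evaluates the classifier output of Equations (9)-(10). The firing strengths W could come from the earlier sketch of Equations (7)-(8).

```python
import numpy as np

def solve_consequents(X, W, T):
    """One-pass estimation of the TSK-1 consequent parameters (Equations (11)-(16)).
    X: (K, J) inputs, W: (K, I) raw firing strengths, T: (K,) training targets."""
    K, J = X.shape
    I = W.shape[1]
    W_bar = W / W.sum(axis=1, keepdims=True)          # normalized firing strengths (Equation (9))
    X_ext = np.hstack([np.ones((K, 1)), X])           # [1, x(1,k), ..., x(J,k)]
    # Equation (12): row k of Z is [w_bar_1, w_bar_1*x(1,k), ..., w_bar_I*x(J,k)]
    Z = (W_bar[:, :, None] * X_ext[:, None, :]).reshape(K, I * (J + 1))
    C = np.linalg.pinv(Z) @ T                         # Equations (13)-(16), SVD-based pseudo-inverse
    return C.reshape(I, J + 1), W_bar                 # C[i] = [C_i0, C_i1, ..., C_iJ]

def classifier_output(X, W_bar, C):
    """Equations (9)-(10): firing-strength-weighted sum of the linear rule consequents."""
    X_ext = np.hstack([np.ones((X.shape[0], 1)), X])
    F = X_ext @ C.T                                   # f_i[x(j,k)] for every rule (Equation (10))
    return np.sum(W_bar * F, axis=1)                  # O(k) of Equation (9)
```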

The effectiveness of the proposed EF classifier is first assessed with benchmark datasets [

2) Strict clustering: new clusters are formed only when the data potential is larger than that of all existing cluster centers [

The classifiers evaluated will be represented as follows:

· Classifier #1: The proposed EF classifier using the proposed clustering algorithm and a loose clustering variant;

· Classifier #2: A comparison classifier, using the EF clustering algorithm and a strict clustering scheme;

· Classifier #3: A comparison classifier using a classical clustering algorithm and a loose clustering scheme;

· Classifier #4: A comparison classifier using a classical clustering algorithm and a strict clustering scheme.

The Iris and Wine datasets have four and thirteen inputs, respectively, representing pertinent physical measurements, with more details found in Reference [

Dataset | Number of classifications | Number of attributes | Total number of datapoints used
---|---|---|---
Iris [ | 3 | 4 | 150
Wine [ | 3 | 13 | 144
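For reference, both benchmark datasets are readily available, for example through scikit-learn's bundled copies of the UCI data. The brief sketch below loads them and min-max normalizes each attribute, matching the normalized inputs assumed by the classifier; the exact subsampling used in this work (e.g., the 144 Wine datapoints) and the train/test split are not reproduced here.

```python
import numpy as np
from sklearn.datasets import load_iris, load_wine

def load_normalized(name="iris"):
    """Load a benchmark dataset and min-max normalize each attribute to [0, 1]."""
    data = load_iris() if name == "iris" else load_wine()
    X = data.data.astype(float)
    X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    return X, data.target          # targets are the three class labels of each dataset
```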

The data and clustering across all attributes in one representative trial with the iris dataset are shown in

Results across the simulated datasets are shown in

Evolving Fuzzy Classifier Variant | Metric | Iris | Wine
---|---|---|---
Classifier #1 Proposed (loose clustering) | Accuracy (%) | 99.81 | 94.92
 | No. of clusters (average) | 2.99 | 2.95
 | Classes represented (%) | 99.67 | 98.33
 | Time/Sample (µs) | 84.95 | 91.74
Classifier #2 Comparison (strict clustering) | Accuracy (%) | 99.97 | 90.89
 | No. of clusters (average) | 2.02 | 1.27
 | Classes represented (%) | 67.33 | 42.33
 | Time/Sample (µs) | 78.01 | 84.78
Classifier #3 Comparison (loose clustering) | Accuracy (%) | 99.67 | 97.25
 | No. of clusters (average) | 26.10 | 18.81
 | Classes represented (%) | 99.33 | 95.67
 | Time/Sample (µs) | 119.95 | 127.79
Classifier #4 Comparison (strict clustering) | Accuracy (%) | 99.92 | 91.06
 | No. of clusters (average) | 5.26 | 3.66
 | Classes represented (%) | 66.67 | 41.00
 | Time/Sample (µs) | 89.02 | 95.58

From these processing results, the accuracy of the proposed Classifier #1 is comparable to that of Classifiers #2, #3, and #4, but with additional significant advantages. Although Classifier #2 takes less processing time than Classifier #1 because it generates fewer clusters, its resulting clusters do not sufficiently represent the classes, as shown by its much lower class representation. As a result, Classifier #2 does not have a fully transparent fuzzy rule base for decision making. Furthermore, the proposed Classifier #1 outperforms Classifiers #3 and #4 in terms of class representation, with the significant advantage of a more transparent rule base. In addition, Classifier #1 has lower processing times than Classifiers #3 and #4 because it generates fewer clusters.

The proposed EF Classifier #1 demonstrates improvement in terms of processing time and class representation of the clusters, making it more suitable for an IM condition monitoring application as will be discussed in the next section. Hence, the proposed EF classifier has been successfully validated with simulation results.

Five fractional-horsepower IMs are tested, with the conditions: healthy #1, healthy #2, and one-, two-, and three-broken-rotor-bar faults; each is tested under four loading conditions: decoupled, low, medium, and heavy load. These defects are representative of a motor in a gradually worsening state prior to reaching a catastrophic failure condition. In addition, the tests are conducted under two line frequencies, 50 Hz and 60 Hz, respectively. Data acquisition is performed with split-core current transformers and a custom-built wireless data acquisition system acquiring 64,000 samples at a 1 kHz sampling rate. The fault feature extraction is based on current spectrum analysis, involving the sidebands of the fundamental line frequency [

To serve as inputs to the EF classifier, monitoring indices are created based on the signal-to-noise ratio of the extracted fault features, where a higher index indicates a more severe fault condition. An example of such a monitoring index is illustrated in

In addition, it can be noted that the influence of noise on this EF classifier in this application has been mitigated by two factors: 1) the long data acquisition period, which effectively averages out the influence of non-periodic noise signals, and 2) the clustering algorithm, in which an outlier data input would have insufficient data potential to form a cluster and thus cannot influence the fuzzy reasoning of the EF classifier.
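The monitoring indices used in this work are not reproduced here; purely as an illustration of the idea, the sketch below (Python/NumPy, illustrative names) computes an SNR-style index from a stator-current record by comparing the spectral power in narrow bands around assumed fault-related frequencies (e.g., the rotor-bar sidebands of the line frequency) with the broadband noise floor. The band locations, bandwidth, and noise estimate are assumptions, not the values used in this work.

```python
import numpy as np

def snr_monitoring_index(current, fs, fault_freqs, bw=1.0):
    """Illustrative SNR-style monitoring index (dB): power near assumed fault
    frequencies relative to the median broadband noise level of the spectrum."""
    spectrum = np.abs(np.fft.rfft(current)) / len(current)
    freqs = np.fft.rfftfreq(len(current), d=1.0 / fs)
    noise_floor = np.median(spectrum ** 2)
    fault_power = sum(
        np.sum(spectrum[(freqs >= f - bw) & (freqs <= f + bw)] ** 2)
        for f in fault_freqs
    )
    return 10.0 * np.log10(fault_power / noise_floor)   # higher index -> more severe fault
```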

The data and clustering across all attributes in one representative trial with the 60 Hz rotor bar dataset are shown in

Dataset | Number of classifications | Number of attributes | Total number of datapoints used
---|---|---|---
Rotor Bar, 50 Hz | 5 | 3 | 1000
Rotor Bar, 60 Hz | 5 | 3 | 1000

Results across the implementation datasets are summarized in

From these processing results, it is seen that the proposed EF Classifier #1 achieves the best accuracy of all four classifiers. This can be attributed to the new clustering algorithm's improved tracking of changing data. Although Classifier #2 has a faster average processing speed than Classifier #1 due to generating fewer clusters, its clusters do not sufficiently represent the classes; as a result, Classifier #2 does not have a fully transparent fuzzy rule base for decision making. Likewise, the proposed EF Classifier #1 outperforms Classifiers #3 and #4 in terms of class representation and the number of generated clusters; it also has the significant advantages of a more transparent rule base and faster processing owing to fewer clusters.

In summary, the proposed Classifier #1 has the best classification accuracy, processing time efficiency, and class representation. With more transparent cluster representation, missed and false alarms in a diagnostic application can be investigated, since it is possible to track the monitoring indices responsible for an incorrect classifier output.

Evolving Fuzzy Classifier Variant | Metric | 50 Hz | 60 Hz
---|---|---|---
Classifier #1 Proposed (loose clustering) | Accuracy (%) | 99.37 | 99.45
 | No. of clusters (average) | 5.00 | 4.75
 | Classes represented (%) | 100.00 | 95.00
 | Time/Sample (µs) | 19.79 | 16.80
Classifier #2 Comparison (strict clustering) | Accuracy (%) | 92.30 | 88.47
 | No. of clusters (average) | 2.86 | 2.00
 | Classes represented (%) | 57.20 | 40.00
 | Time/Sample (µs) | 14.20 | 13.18
Classifier #3 Comparison (loose clustering) | Accuracy (%) | 98.90 | 98.90
 | No. of clusters (average) | 115.02 | 106.99
 | Classes represented (%) | 100.00 | 87.60
 | Time/Sample (µs) | 148.93 | 126.73
Classifier #4 Comparison (strict clustering) | Accuracy (%) | 94.44 | 86.25
 | No. of clusters (average) | 9.13 | 8.47
 | Classes represented (%) | 58.40 | 40.00
 | Time/Sample (µs) | 23.54 | 21.59

The processing time for each sample across all datasets is on the order of tens of microseconds. This is significantly faster than the generation of the inputs, which requires processes such as data acquisition, signal processing, and monitoring index generation. Hence, this classifier demonstrates suitability for use in real industrial condition monitoring applications.

For this application of the proposed EF classifier to IM condition monitoring, the following are assumed: 1) the inputs are representative of the condition of the motor being monitored; 2) during the training process, there are known outputs corresponding to the given inputs, so that the consequent parameters of Equation (16) can be estimated.

The developed EF classifier has the following limitations to be addressed in future work: 1) the monitoring indices are based on the signal-to-noise ratio (SNR) of representative features, which could vary with the data acquisition system used and the motor dynamics; 2) when the developed EF classifier is used in a new application, some test data are needed to update the initial classification architecture.

An EF classifier has been developed in this work for IM health condition monitoring. A new updated evolutionary clustering algorithm is proposed for fuzzy inference reasoning and to formulate a rule structure accounting for multiple clusters belonging to different output classes. Its effectiveness is verified by simulation tests using some benchmark datasets. In addition, the EF classifier is implemented for IM rotor bar fault diagnosis. Test results have shown that the developed EF classifier has improved classification accuracy, processing efficiency, and ability to produce distinct fuzzy clusters/rules to clearly indicate the reasoning process behind every classification. Due to these factors, the developed intelligent IM monitoring system has the potential to be used in real industrial predictive maintenance applications.

The authors declare no conflicts of interest regarding the publication of this paper.

Luong, P. and Wang, W. (2019) An Evolving Fuzzy Classifier for Induction Motor Health Condition Monitoring. Intelligent Control and Automation, 10, 129-141. https://doi.org/10.4236/ica.2019.104009