To enhance the clustering ability of the self-organizing network, this paper introduces a quantum-inspired self-organizing clustering algorithm. First, the clustering samples and the weights of the competition layer are mapped to qubits on the Bloch sphere; then the winning node is obtained by computing the spherical distance between each sample and each weight. Finally, the weights of the winning node and its neighborhood are updated by rotating them toward the sample on the Bloch sphere until convergence. Clustering results on the IRIS data set show that the proposed approach is clearly superior to the classical self-organizing network and the K-means clustering algorithm.

Since Kak [

In all of the above models, the fusion of quantum computing and supervised neural networks has been widely studied. However, the fusion of quantum computing and unsupervised self-organizing neural networks has received relatively little attention. Among classical clustering algorithms, Cai et al. [

In quantum computing, a qubit is a two-level quantum system described in a two-dimensional complex Hilbert space. According to the principle of superposition, a qubit can be defined as

|ψ⟩ = cos(θ/2)|0⟩ + e^{iφ} sin(θ/2)|1⟩

where θ ∈ [0, π] and φ ∈ [0, 2π) are the polar and azimuthal angles of the qubit on the Bloch sphere.

Notation like |0⟩ and |1⟩ is the Dirac notation for the two computational basis states of a qubit.

By definition, a Bloch vector is a vector whose components (x, y, z) = (sin θ cos φ, sin θ sin φ, cos θ) locate the qubit state on the unit sphere.

In this work, we adjust the weights of the competition layer by rotating them around an axis toward the target qubit on the Bloch sphere. Such a rotation can simultaneously change the two parameters θ and φ.

Let

Let the Bloch coordinates of

From the principles of quantum computing, on the Bloch sphere a rotation through an angle δ about an axis n is represented by the operator

R_n(δ) = cos(δ/2) I − i sin(δ/2) (n · σ)

where I is the 2 × 2 identity matrix and σ = (σ_x, σ_y, σ_z) is the vector of Pauli matrices.

Hence, on the Bloch sphere, a rotation through the angle between the current Bloch vector and the target Bloch vector, about the axis given by their normalized cross product, carries the former onto the latter,

and the rotation operation can be written as

From the principles of quantum computing, the coordinates x, y, and z of a qubit |ψ⟩ on the Bloch sphere can be measured by the Pauli operators as the expectation values x = ⟨ψ|σ_x|ψ⟩, y = ⟨ψ|σ_y|ψ⟩, and z = ⟨ψ|σ_z|ψ⟩ (Equations (8)-(10)).
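As a concrete illustration, this projective measurement of Bloch coordinates can be sketched in NumPy (a minimal sketch; the function and variable names are our own):

```python
import numpy as np

# Pauli matrices
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_coords(psi):
    """Bloch coordinates (x, y, z) of a qubit state |psi> as
    expectation values of the Pauli operators."""
    psi = np.asarray(psi, dtype=complex)
    x = np.real(np.conj(psi) @ SX @ psi)
    y = np.real(np.conj(psi) @ SY @ psi)
    z = np.real(np.conj(psi) @ SZ @ psi)
    return np.array([x, y, z])

# |0> maps to the north pole of the Bloch sphere
print(bloch_coords([1, 0]))
```

For example, the equal superposition (|0⟩ + |1⟩)/√2 lands on the equator at (1, 0, 0).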

The proposed quantum self-organizing neural network model, based on Bloch spherical rotation, is shown in

Let

values of the jth node in the competition layer. By projective measurement, the Bloch coordinates of

Since the radius of the Bloch sphere is equal to 1, the spherical distance between

Hence, the distance between
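Assuming the standard great-circle distance on the unit sphere, the spherical distance between two Bloch vectors can be computed as follows (names are illustrative):

```python
import numpy as np

def spherical_distance(p, q):
    """Great-circle distance between two unit Bloch vectors.
    Since the Bloch sphere has radius 1, the arc length equals
    the angle between the vectors."""
    dot = np.clip(np.dot(p, q), -1.0, 1.0)  # guard against rounding outside [-1, 1]
    return np.arccos(dot)
```

Antipodal qubits (e.g. |0⟩ and |1⟩, at the two poles) are at the maximum distance π.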

First, all sample data are normalized to the interval [0, 1]. Let

where

Let the sample after normalization be

At this point,

where

Let

For the lth sample

Suppose that the competition layer has C nodes, and that the node with a minimum distance is defined as the winning one. Hence, the winning node should satisfy the following equation
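The winner selection described above can be sketched as follows, assuming each n-dimensional sample and each of the C weights are stored as arrays of Bloch vectors (shapes and names are our own):

```python
import numpy as np

def winner(sample_bloch, weight_bloch):
    """sample_bloch: (n, 3) Bloch vectors of one n-dimensional sample.
    weight_bloch: (C, n, 3) Bloch vectors of the C competition nodes.
    Returns the index of the node with minimum total spherical distance."""
    # per-node, per-dimension dot products between sample and weight vectors
    dots = np.clip(np.einsum('nd,cnd->cn', sample_bloch, weight_bloch), -1.0, 1.0)
    dist = np.arccos(dots).sum(axis=1)  # summed spherical distance per node
    return int(np.argmin(dist))
```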

The self-organizing network is a typical unsupervised clustering model; it is suitable for problems where the number of clusters is not known beforehand. Its training is completely different from that of traditional BP neural networks. If a self-organizing network had to rely on supervised information for clustering, it would be powerless on clustering problems where no supervision is available. The training process of our model therefore does not use any prior knowledge about the sample classes; otherwise the model would lose its generalization ability, which is the shortcoming of Ref. [

Step 1 Quantum state description of the sample. Convert the samples to qubit states by Equations (13)-(16). Measure the quantum samples by Equations (8)-(10) to obtain their Bloch coordinates.

Step 2 Initialization of the network weights. Initialize all network weights to qubits randomly distributed on the Bloch sphere, as shown below.

where
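One standard way to draw qubits uniformly on the Bloch sphere (cos θ uniform in [−1, 1], φ uniform in [0, 2π); a sketch under our own naming, not necessarily the paper's exact sampling scheme) is:

```python
import numpy as np

def random_bloch_weights(C, n, rng=None):
    """Initialize C competition nodes, each with n qubit weights,
    uniformly distributed on the Bloch sphere. Returns shape (C, n, 3)."""
    rng = np.random.default_rng() if rng is None else rng
    z = rng.uniform(-1.0, 1.0, size=(C, n))        # cos(theta) uniform
    phi = rng.uniform(0.0, 2 * np.pi, size=(C, n)) # azimuth uniform
    s = np.sqrt(1.0 - z**2)                        # sin(theta)
    return np.stack([s * np.cos(phi), s * np.sin(phi), z], axis=-1)
```

Sampling cos θ (rather than θ itself) uniformly avoids clustering the initial weights near the poles.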

Step 3 Initialization of the network parameters. These include: the maximum number of iterative steps

Step 4 Compute the current learning rate, neighborhood radius, and variance by the following equations.
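The paper's exact annealing equations are not reproduced here; as an assumed illustration only, a typical SOM-style schedule with exponentially decaying learning rate, neighborhood radius, and variance might look like:

```python
import numpy as np

def schedules(t, t_max, eta0=0.5, r0=5.0):
    """Assumed SOM-style annealing for step t of t_max: exponentially
    decaying learning rate, neighborhood radius, and Gaussian variance.
    eta0 and r0 are illustrative initial values, not the paper's."""
    eta = eta0 * np.exp(-t / t_max)   # learning rate
    r = r0 * np.exp(-t / t_max)       # neighborhood radius
    sigma = max(r, 1e-6)              # variance parameter of the neighborhood
    return eta, r, sigma
```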

Step 5 Measure all quantum weights in the competition layer by Equations (8)-(10) to obtain their Bloch coordinates. For the lth sample

Step 6 For the lth sample

where j^{*} denotes the index of the winning node.

According to the above theorem, the rotation axis and rotation matrix for rotating

Then, the rotation operation can be written as

where
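At the Bloch-vector level, the weight update of Step 6 amounts to rotating each weight vector toward the corresponding sample vector along their great circle. A sketch via the Rodrigues rotation formula (the names and the cross-product axis convention are our assumptions):

```python
import numpy as np

def rotate_toward(w, s, delta):
    """Rotate Bloch vector w by angle delta about the axis
    n = (w x s)/|w x s|, moving w along the great circle toward s.
    With delta = arccos(w . s), w lands exactly on s."""
    axis = np.cross(w, s)
    norm = np.linalg.norm(axis)
    if norm < 1e-12:  # w and s (anti)parallel: no unique rotation axis
        return w.copy()
    n = axis / norm
    # Rodrigues rotation formula
    return (w * np.cos(delta)
            + np.cross(n, w) * np.sin(delta)
            + n * np.dot(n, w) * (1 - np.cos(delta)))
```

Because delta can be made arbitrarily small, this update permits much finer weight adjustment than a coordinate-difference update.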

Step 7 If

In order to experimentally illustrate the effectiveness of the proposed BQSON, the IRIS samples are used in this section to compare it with the Classical Self-Organization Network (CSON), K-means clustering, and Nearest Neighbor Clustering (NNC). In these experiments, we implement and evaluate BQSON in Matlab (Version 7.1.0.246) on a Windows PC with a 2.19 GHz CPU and 1.00 GB RAM. To make the comparison fair, BQSON has the same structure and parameters as CSON. The IRIS data set contains 150 four-dimensional samples divided into three classes of 50 samples each: setosa (1 - 50), versicolor (51 - 100), and virginica (101 - 150).

Both BQSON and CSON have 4 input nodes and 100 competition nodes arranged in a square matrix. The other parameters are set as follows:

If the clustering results do not change in 100 consecutive steps, we say the algorithm has reached convergence. For K-means clustering, K is set to 3, and if each variation of the class centers is less than
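The convergence criterion for the self-organizing networks (assignments unchanged for 100 consecutive steps) can be expressed as follows (an illustrative helper, not code from the paper):

```python
def converged(history, window=100):
    """True if the clustering assignment (e.g. a tuple of cluster labels,
    one per sample) has not changed for `window` consecutive steps."""
    if len(history) < window + 1:
        return False
    ref = history[-1]
    return all(h == ref for h in history[-(window + 1):])
```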

Since the log likelihood function is used more in evaluating Bayesian classification networks and less in clustering algorithms, we do not use this index in our work. To facilitate comparison, two relevant concepts are defined as follows:

Precision Ratio Let the number of correctly clustered samples in the kth class after clustering be NPR, and the total number of samples in the kth class after clustering be NA. The Precision Ratio is defined as follows

Recall Ratio Let the number of correctly clustered samples in the kth class after clustering be NPR, and the total number of samples in the kth class before clustering be NB. The Recall Ratio is defined as follows
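Both ratios are simple quotients; for example, a class with 48 correct samples out of 50 assigned, drawn from 50 true members, scores 96% on both measures:

```python
def precision_recall(n_correct, n_after, n_before):
    """Precision Ratio = NPR / NA (correct / total assigned to the class
    after clustering); Recall Ratio = NPR / NB (correct / class size
    before clustering)."""
    return n_correct / n_after, n_correct / n_before

p, r = precision_recall(48, 50, 50)
```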

After 9797 iterative steps, the BQSON reaches convergence. All samples are divided into three classes, and each class contains 50 samples. The first class contains 50 “setosa” samples. The second class contains 48 “versicolor” samples and 2 “virginica” samples. The third class contains 48 “virginica” samples and 2 “versicolor” samples. The Precision Ratios of the three classes reach 100%, 96%, and 96%, respectively, and so do the Recall Ratios.

After 10,000 iterative steps, the CSON has not reached convergence: the first class contains 50 “setosa” samples, but the model cannot separate the remaining 100 samples. Even after running for 30,000 iterative steps, the CSON still does not converge.

For K-means clustering, convergence is reached after 11 iterative steps. The first class contains 50 “setosa” samples, the second class contains 61 samples of which 47 are correct, and the third class contains 39 samples of which 36 are correct. The Precision Ratios of the three classes reach 100%, 77.05%, and 92.31%, respectively, and the Recall Ratios reach 100%, 94%, and 72%, respectively.

For NNC, all samples are divided into three classes. The first class contains 50 “setosa” samples, the second class contains 62 samples of which 50 are correct, and the third class contains 38 samples, all of which are correct. The Precision Ratios of the three classes reach 100%, 80.65%, and 100%, respectively, and the Recall Ratios reach 100%, 100%, and 76%, respectively.

From the experimental results, it is clear that both the Precision Ratio and the Recall Ratio of BQSON are the highest among the four algorithms. These results show that BQSON is clearly superior not only to CSON but also to K-means and NNC.

Next, we explain the above experimental results theoretically. First, BQSON adopts a new way to calculate the distance between nodes of the input layer and the competition layer. Existing clustering algorithms generally use the Euclidean distance, which is computed from coordinates. In BQSON, however, the distance is obtained by calculating the Bloch spherical distance in each dimension between input samples and competition nodes. Let

where C denotes the number of nodes in competition layer, and L denotes the total number of samples.

For the normalized samples, in CSON, the per-dimension difference between sample and weight

The

Secondly, BQSON adopts a new way of adjusting weights. In CSON, the vector differences between samples and weights are used directly to adjust the weights, which is strongly influenced by the learning rate and makes fine adjustment difficult. In BQSON, however, the weights are adjusted by rotating them toward a sample so as to approximate it. Because the rotation is performed on the Bloch sphere, it permits subtle adjustment of the weights, which enhances the clustering ability of BQSON.

In this work, a quantum self-organizing network clustering algorithm is proposed. In our approach, the weights of the nodes in the competition layer are updated by rotating qubits on the Bloch sphere. The comparative experiments on IRIS show that the clustering ability of the proposed approach is significantly higher than that of the classic self-organizing network. The Precision Ratio and Recall Ratio of BQSON are 7.5467% and 8.6667% higher than those of K-means, and 3.7833% and 5.3333% higher than those of NNC. It is worth pointing out, however, that BQSON introduces additional computing operations, such as the rotation axis, the rotation matrix, and projective measurement, which increase the amount of calculation, prolong the running time, and reduce the efficiency of clustering. Nevertheless, these operations greatly improve the clustering ability of BQSON. In other words, BQSON trades computing efficiency for clustering ability, which is consistent with the no-free-lunch theorem. Hence, how to enhance the computing efficiency of BQSON is a subject for further research.

This work was supported by the National Natural Science Foundation of China (Grant No. 61170132), Natural Science Foundation of Heilongjiang Province of China (Grant No. F2015021), Science Technology Research Project of Heilongjiang Educational Committee of China (Grant No. 12541059), and Youth Foundation of Northeast Petroleum University (Grant No. 2013NQ119).