Clustering Algorithm of Quantum Self-Organization Network

To enhance the clustering ability of the self-organizing network, this paper introduces a quantum-inspired self-organizing clustering algorithm. First, the clustering samples and the weight values in the competitive layer are mapped to qubits on the Bloch sphere; then, the winning node is obtained by computing the spherical distance between sample and weight value. Finally, the weight values of the winning node and its neighbourhood are updated by rotating them towards the sample on the Bloch sphere until convergence. The clustering results on the IRIS samples show that the proposed approach is clearly superior to the classical self-organizing network and the K-means clustering algorithm.


Introduction
Since Kak [1] first proposed the concept of quantum-inspired neural computation in 1995, the quantum neural network (QNN) has attracted great attention from international scholars, and a large number of novel techniques have been studied for quantum computation and neural networks. For example, Gopathy et al. [2] proposed a model of quantum neural networks with multilevel hidden neurons based on the superposition of quantum states in quantum theory. Michail et al. [3] attempted to reconcile the linear reversible structure of quantum evolution with the nonlinear irreversible dynamics of neural networks. Michiharu et al. [4] presented a novel learning model with qubit neurons according to quantum circuits for the XOR problem and described the influence on learning of reducing the number of neurons. Gupta et al. [5] defined a new mathematical model of quantum neural networks, building on Deutsch's model of quantum computational networks, which provides an approach for building scalable parallel computers. Fariel [6] proposed a neural network with quantum gated nodes and indicated that such a quantum network may capture more advantageous features of biological systems than regular electronic devices. In our previous work [7], we proposed a quantum BP neural network model with a learning algorithm based on single-qubit rotation gates and two-qubit controlled-rotation gates. Next, we proposed a neural network model with quantum gated nodes and a smart algorithm for it [8], which shows superior performance in comparison with a standard error back-propagation network. Adenilton et al. [9] proposed a weightless model based on quantum circuits; it is not only quantum-inspired but actually a quantum NN. This model is based on Grover's search algorithm, and it can both perform quantum learning and simulate the classical models. At present, the fusion of quantum computation and neural computation is gradually becoming a new research direction.
In all of the above models, the fusion of quantum computing and supervised neural networks has been widely studied. However, the fusion of quantum computing and unsupervised self-organizing neural networks has received relatively little attention. Among classical clustering algorithms, Cai et al. [10] proposed a new algorithm called K-Distributions for clustering categorical data, and Huang [11] investigated the clustering problem of large data sets with mixed numeric and categorical values. As is well known, unsupervised clustering is the only function of the self-organizing network. For a self-organizing network, the unsupervised clustering process is, in essence, the application process of the network. This is very different from the BP network, which must perform a supervised training process before application. Although we proposed quantum self-organizing networks with quantum inputs and quantum weights [12], that model applied a supervised mode of training, which severely reduces its generalization ability. In addition, although quantum computing effectively enhances the performance of traditional self-organizing networks, the fusion research of quantum computation and neural computation is still far from mature. It is necessary to further research new ways of integrating them in order to further improve the performance of neural computation. Hence, we propose a quantum self-organization network based on Bloch spherical rotation (BQSON) and design its clustering algorithm in detail. In our approach, both the samples and the weights are denoted by qubits described on the Bloch sphere; the weights of the competition-winning node and its neighbourhood nodes are adjusted by rotating these qubits towards the corresponding sample qubit about a rotation axis. The experimental results on the IRIS clustering benchmark show that our approach is superior to traditional clustering methods such as the common self-organizing network, K-means clustering, and nearest-neighbour clustering.

The Spherical Description of Qubit
In quantum computing, a qubit is a two-level quantum system, which can be described in a two-dimensional complex Hilbert space. According to the principle of superposition, a qubit can be defined as

|φ⟩ = cos(θ/2)|0⟩ + e^(iφ) sin(θ/2)|1⟩,

where 0 ≤ θ ≤ π and 0 ≤ φ < 2π. Here |·⟩ is called the Dirac notation, and we will see it often in the following paragraphs, as it is the standard notation for states in quantum mechanics. Therefore, unlike the classical bit, which can only equal 0 or 1, the qubit resides in a vector space parameterized by the continuous variables θ and φ. The normalization condition means that the qubit's state can be represented by a point on a sphere of unit radius, called the Bloch sphere. The Bloch sphere representation is useful as it provides a geometric picture of the qubit and of the transformations that can be applied to its state. This sphere can be embedded in a three-dimensional Cartesian space, where the state |φ⟩ corresponds to the point

(x, y, z) = (sin θ cos φ, sin θ sin φ, cos θ).

By definition, a Bloch vector is a vector whose components (x, y, z) represent a point on the Bloch sphere. We can say that the angles θ and φ define a Bloch vector, as shown in Figure 1.
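As a minimal sketch of the angle-to-coordinate mapping above (the function name is ours, not from the paper), the Bloch coordinates of a qubit parameterized by θ and φ can be computed as:

```python
import numpy as np

def bloch_coordinates(theta, phi):
    """Cartesian coordinates of the qubit cos(theta/2)|0> + e^{i phi} sin(theta/2)|1>."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
```

Every point produced this way has unit norm, i.e. it lies on the Bloch sphere; θ = 0 gives the north pole (state |0⟩) and θ = π the south pole (state |1⟩).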

The Rotation of a Qubit about an Axis on the Bloch Sphere
In this work, we adjust the weights of the competition layer by rotating them around an axis towards the target qubit on the Bloch sphere. This rotation can simultaneously change the two parameters θ and φ of a qubit and can automatically achieve the best matching of the two adjustments, which better simulates quantum behaviour. To achieve this rotation, it is crucial to determine the rotation axis, as it directly impacts the convergence speed and efficiency of the algorithm. To determine the rotation axis, we propose the following method.
Let W = (w_x, w_y, w_z) and X = (x_x, x_y, x_z) denote two points on the Bloch sphere. The rotation axis for rotating the qubit from W to X can be written as the normalized cross product of W and X, and the relation of these three vectors is shown in Figure 2.
According to the above method, the axis of rotating W to X can be written as

R_axis = (W × X) / ‖W × X‖ = (n_x, n_y, n_z).

From the principles of quantum computing, on the Bloch sphere a rotation through an angle δ about the axis directed along the unit vector n = (n_x, n_y, n_z) is given by the matrix

R_n(δ) = cos(δ/2) I − i sin(δ/2) (n_x σ_x + n_y σ_y + n_z σ_z),

where I denotes the unit matrix and σ_x, σ_y, σ_z denote the Pauli matrices. Hence, on the Bloch sphere, a rotation through an angle δ about the axis R_axis that rotates the current qubit |w⟩ towards the target qubit |x⟩ can be written as

M(δ) = cos(δ/2) I − i sin(δ/2) (R_axis · σ),

and the rotation operation can be written as |w(t+1)⟩ = M(δ)|w(t)⟩.
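The rotation above acts on the qubit state; its effect on the Bloch vector is an ordinary 3-D rotation about the same axis, which can be sketched with Rodrigues' rotation formula (an equivalent form on the sphere; function names are ours). Rotating W about this axis through the full angle arccos(W · X) lands exactly on X:

```python
import numpy as np

def rotation_axis(w, x):
    """Unit rotation axis for rotating Bloch vector w towards x: normalized cross product."""
    n = np.cross(w, x)
    return n / np.linalg.norm(n)

def rotate(v, n, delta):
    """Rodrigues' formula: rotate vector v by angle delta about the unit axis n."""
    return (v * np.cos(delta)
            + np.cross(n, v) * np.sin(delta)
            + n * np.dot(n, v) * (1 - np.cos(delta)))
```

A partial rotation (δ smaller than the full angle) moves W part of the way towards X along the great circle, which is exactly how the weight update of the algorithm proceeds.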

The Measurement of Qubits
From the principles of quantum computing, the coordinates x, y, and z of a qubit on the Bloch sphere can be measured as the expectation values of the Pauli operators:

x = ⟨φ|σ_x|φ⟩ = sin θ cos φ, (8)
y = ⟨φ|σ_y|φ⟩ = sin θ sin φ, (9)
z = ⟨φ|σ_z|φ⟩ = cos θ. (10)
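The three expectation values can be checked numerically against the closed forms in Equations (8)-(10); the sketch below (function name ours) builds the state vector and evaluates ⟨φ|σ|φ⟩ for each Pauli matrix:

```python
import numpy as np

# Pauli matrices
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]])
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def measure_bloch(theta, phi):
    """Bloch coordinates of cos(theta/2)|0> + e^{i phi} sin(theta/2)|1>
    obtained as Pauli expectation values <psi|sigma|psi>."""
    psi = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
    return np.array([np.real(psi.conj() @ s @ psi) for s in (SX, SY, SZ)])
```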

Quantum Self-Organization Neural Networks Model
We propose a quantum self-organization neural network model based on Bloch spherical rotation, as shown in Figure 3, where both the inputs and the weight values are qubits described on the Bloch sphere.
Let X = [|x_1⟩, |x_2⟩, ..., |x_n⟩]^T denote the inputs and W_j = [|w_j1⟩, |w_j2⟩, ..., |w_jn⟩]^T denote the weights of the jth node in the competition layer, respectively. From spherical geometry, the shortest distance between two points on a sphere is defined as the length of the minor arc of the great circle determined by these two points and the centre of the Bloch sphere. Because the radius of the Bloch sphere is equal to 1, the spherical distance between two points reduces to the angle between their Bloch vectors:

d(x, w) = arccos(x · w). (11)

Hence, the distance between X and W_j, namely, the output of the jth node in the competition layer, may be given by

D(X, W_j) = Σ_{i=1}^{n} arccos(x_i · w_ji). (12)

Let X_l = [x_l1, x_l2, ..., x_ln]^T denote the lth sample. We adopt the following normalization method.
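Equations (11)-(12) can be sketched directly in NumPy (function names are ours); since the Bloch sphere has unit radius, the great-circle distance is just the arccosine of the dot product, clipped to guard against floating-point overshoot:

```python
import numpy as np

def spherical_distance(a, b):
    """Great-circle distance between two unit Bloch vectors (sphere radius 1)."""
    return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))

def node_output(sample_vectors, weight_vectors):
    """Equation (12): sum of per-dimension spherical distances between
    a sample's qubits and one competition node's weight qubits."""
    return sum(spherical_distance(x, w) for x, w in zip(sample_vectors, weight_vectors))
```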

x̄_li = (x_li − MIN) / (MAX − MIN), (13)

where MAX and MIN respectively denote the maximum and the minimum of all samples. Let the sample after normalization be X̄_l = [x̄_l1, x̄_l2, ..., x̄_ln]^T; we convert X̄_l to the phases of qubits by the following equations:

θ_li = π x̄_li, (14)
φ_li = 2π x̄_li, i = 1, 2, ..., n. (15)

At this point, X̄_l may be converted to qubits on the Bloch sphere, as shown in the following equation:

|x_li⟩ = cos(θ_li/2)|0⟩ + e^(iφ_li) sin(θ_li/2)|1⟩. (16)

For the lth sample X_l = [|x_l1⟩, |x_l2⟩, ..., |x_ln⟩]^T, according to Equations (11)-(12), the spherical distance between X_l and W_j can be written as

D(X_l, W_j) = Σ_{i=1}^{n} arccos(x_li · w_ji),

where x_li and w_ji denote the Bloch vectors of the corresponding qubits. Suppose that the competition layer has C nodes, and that the node with the minimum distance is defined as the winning one. Hence, the winning node j* should satisfy

j* = arg min_{1 ≤ j ≤ C} D(X_l, W_j).
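The sample-preparation pipeline (normalize, map to angles, convert to Bloch coordinates) can be sketched as below. The exact phase convention is partly lost in the extracted text, so the mapping θ = π·x̄, φ = 2π·x̄ used here should be treated as an assumption, as should the function name:

```python
import numpy as np

def sample_to_bloch(samples):
    """Normalize samples to [0, 1] over all entries (MIN-MAX), map each value to
    angles (theta = pi * xbar, phi = 2 * pi * xbar; assumed convention), and
    return the per-dimension Bloch coordinates, shape (L, n, 3)."""
    s = np.asarray(samples, dtype=float)
    xbar = (s - s.min()) / (s.max() - s.min())
    theta, phi = np.pi * xbar, 2 * np.pi * xbar
    return np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1)
```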

Network Clustering Algorithm
The self-organizing network is a typical unsupervised clustering model; it is suitable for problems where the number of clusters is not known beforehand. Its training is completely different from that of traditional BP neural networks. If a self-organizing network must apply supervised information in clustering, it is powerless for clustering problems where no supervision information is available. The training process of our model does not contain any prior knowledge about the sample classification results; otherwise it would lose its generalization ability, which is the shortcoming of Ref. [12]. Our approach can be summarized as follows.
Step 1 Quantum state description of the samples. Convert the samples to qubit states by Equations (13)-(16). Measure the quantum samples by Equations (8)-(10) to obtain their Bloch coordinates.
Step 2 Initialization of the network weights. Initialize all the network weights to qubits randomly distributed on the Bloch sphere: θ_ji = π·rand, φ_ji = 2π·rand, j = 1, 2, ..., C, i = 1, 2, ..., n, where C denotes the number of competition nodes and rand denotes a random number in (0, 1).
Step 3 Initialization of the network parameters, including: the maximum number of iterative steps G, the initial value of the learning rate β_0, the final value of the learning rate β_f, the initial value of the neighbourhood radius r_0, the final value of the neighbourhood radius r_f, the initial value of the variance σ_0, and the final value of the variance σ_f. Set the current iterative step t to 0.
Step 4 Compute the current learning rate, neighborhood radius, and variance by the following equations.
β(t) = β_0 (β_f/β_0)^(t/G),  r(t) = r_0 (r_f/r_0)^(t/G),  σ(t) = σ_0 (σ_f/σ_0)^(t/G).

Step 5 Measure all quantum weights in the competition layer by Equations (8)-(10) to obtain their Bloch coordinates. For the lth sample, determine the winning node j* and its neighbourhood, and compute the neighbourhood coefficient

h(j, j*) = exp(−d²(j, j*) / (2σ²(t))),

where d(j, j*) denotes the spherical distance between the jth node and the j*th node. According to the theorem above, compute the rotation axis and rotation matrix for rotating the weight qubits of the winning node and its neighbourhood towards the corresponding sample qubits, with rotation angle proportional to β(t)h(j, j*). Then, the rotation operation can be written as |w_ji(t+1)⟩ = M(δ_ji)|w_ji(t)⟩.

Step 6 If t > G or the clustering results no longer change, save the clustering results and stop; else set t = t + 1 and go back to Step 4.
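The whole loop of Steps 1-6 can be sketched as follows. This is a sketch under stated assumptions, not the paper's implementation: the exponential decay schedules, the grid layout of the competition nodes, and the rotation-angle rule δ = β(t)·h(j, j*)·(spherical gap) fill in details lost from the extracted text, and the Bloch-vector rotation is carried out directly with Rodrigues' formula (equivalent on the sphere to applying the 2×2 rotation matrix to the qubit):

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def rotate(v, n, delta):
    """Rodrigues' rotation of vector v about unit axis n by angle delta."""
    return (v * np.cos(delta) + np.cross(n, v) * np.sin(delta)
            + n * np.dot(n, v) * (1 - np.cos(delta)))

def train_bqson(X, rows=2, cols=2, G=60, beta0=0.5, betaf=0.01,
                sigma0=2.0, sigmaf=0.5, seed=0):
    """X: array (L, n, 3) of per-dimension Bloch vectors for L samples.
    Returns the winning-node index for each sample after training."""
    rng = np.random.default_rng(seed)
    L, n, _ = X.shape
    C = rows * cols
    # Step 2: weights initialized as random unit Bloch vectors
    W = np.array([[unit(rng.normal(size=3)) for _ in range(n)] for _ in range(C)])
    grid = np.array([(j // cols, j % cols) for j in range(C)], dtype=float)
    for t in range(G):
        # Step 4: exponentially decayed learning rate and neighbourhood width (assumed)
        beta = beta0 * (betaf / beta0) ** (t / G)
        sigma = sigma0 * (sigmaf / sigma0) ** (t / G)
        for l in range(L):
            # Step 5: per-node sum of spherical distances, then competition
            D = np.arccos(np.clip(np.einsum('cnk,nk->cn', W, X[l]), -1, 1)).sum(axis=1)
            win = int(np.argmin(D))
            h = np.exp(-np.sum((grid - grid[win]) ** 2, axis=1) / (2 * sigma ** 2))
            for j in range(C):
                for i in range(n):
                    w, x = W[j, i], X[l, i]
                    axis_v = np.cross(w, x)
                    nrm = np.linalg.norm(axis_v)
                    if nrm < 1e-9:
                        continue  # parallel or antipodal: rotation axis undefined
                    gap = np.arccos(np.clip(np.dot(w, x), -1.0, 1.0))
                    # rotate the weight part of the way towards the sample
                    W[j, i] = rotate(w, axis_v / nrm, beta * h[j] * gap)
    D = np.arccos(np.clip(np.einsum('cnk,lnk->lcn', W, X), -1, 1)).sum(axis=2)
    return np.argmin(D, axis=1)
```

Given well-separated groups of sample qubits, the competition layer specializes so that the two groups win at different nodes, which is what the clustering output reports.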

Simulations
In order to experimentally illustrate the effectiveness of the proposed BQSON, the IRIS samples are used in this section to compare it with the Classical Self-Organization Network (CSON), K-means clustering, and Nearest Neighbor Clustering (NNC). In these experiments, we implement and evaluate the BQSON in Matlab (Version 7.1.0.246) on a Windows PC with a 2.19 GHz CPU and 1.00 GB RAM. To enhance the impartiality of the comparison, our BQSON has the same structure and parameters as the CSON in this experiment. The IRIS data set contains 150 four-dimensional samples divided into three classes of 50 samples each: setosa (1-50), versicolor (51-100), and virginica (101-150).

Parameter Settings
Both BQSON and CSON have 4 input nodes and 100 competition nodes arranged in a square matrix; the remaining parameters of Step 3 take the same values in both models. If the clustering results do not change in 100 consecutive steps, we say the algorithm has reached convergence. For K-means clustering, K is set to 3, and if each variation of the class centers is less than 10^(−10) in two consecutive generations, the algorithm terminates. For NNC, the clustering threshold is set to λ = 2.0. If the distance of a sample X from the center of the kth class is less than λ, the sample X is considered to belong to the kth class.

Clustering Result Contrasts
Considering that the log-likelihood function is mostly used to evaluate the performance of Bayesian classification networks and is rarely used for clustering algorithms, we do not use this index in our work. To facilitate comparison, two relevant concepts are defined as follows.
Precision Ratio: let the number of correctly clustered samples in the kth class after clustering be NPR, and the total number of samples in the kth class after clustering be NA; the Precision Ratio is defined as P = NPR / NA.
Recall Ratio: let the number of correctly clustered samples in the kth class after clustering be NPR, and the total number of samples in the kth class before clustering be NB; the Recall Ratio is defined as R = NPR / NB.
After 9797 iterative steps, the BQSON reaches convergence. All samples are divided into three classes, and each class contains 50 samples. The first class contains 50 "setosa" samples. The second class contains 48 "versicolor" samples and 2 "virginica" samples. The third class contains 48 "virginica" samples and 2 "versicolor" samples. The Precision Ratios and Recall Ratios of the three classes reach 100%, 96%, and 96%, respectively.
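The two definitions amount to one-line ratios; the sketch below (function name ours) reproduces the reported figures for the second IRIS class under BQSON (48 correct out of 50 clustered, 50 true members):

```python
def precision_recall(npr, na, nb):
    """Precision = NPR/NA (correct / cluster size after clustering);
    Recall = NPR/NB (correct / class size before clustering)."""
    return npr / na, npr / nb

# Second BQSON class: 48 correct of 50 clustered, 50 true members
p, r = precision_recall(48, 50, 50)  # -> (0.96, 0.96)
```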
After 10,000 iterative steps, the CSON does not reach convergence: the first class contains 50 "setosa" samples, but for the remaining 100 samples the model is powerless. In addition, even when run for 30,000 iterative steps, the CSON still does not converge.
For K-means clustering, convergence is reached after 11 iterative steps. The first class contains 50 "setosa" samples, the second class contains 61 samples of which 47 are correct, and the third class contains 39 samples of which 36 are correct. The Precision Ratios of the three classes reach 100%, 77.05%, and 92.31%, respectively, and the Recall Ratios reach 100%, 94%, and 72%, respectively.
For NNC, all samples are divided into three classes. The first class contains 50 "setosa" samples, the second class contains 62 samples of which 50 are correct, and the third class contains 38 samples, all of which are correct. The Precision Ratios of the three classes reach 100%, 80.65%, and 100%, respectively, and the Recall Ratios reach 100%, 100%, and 76%, respectively.

Clustering Results Analysis
From the experimental results, it is clear that both the Precision Ratio and the Recall Ratio of BQSON are the highest among the four algorithms. These results show that the BQSON is clearly superior not only to the CSON but to the K-means and the NNC as well.
Next, we theoretically explain the above experimental results. First, BQSON adopts a new way to calculate the distance between nodes of the input layer and the competition layer. In the existing clustering algorithms, the distance measurement is generally the Euclidean distance, which is calculated from coordinates. In BQSON, however, the distance is obtained by calculating the Bloch spherical distance of each dimension between input samples and competition nodes. Let y_ij(t) denote the jth output corresponding to the ith input sample, where t denotes the current iterative step. Let

ȳ_i(t) = (1/C) Σ_{j=1}^{C} y_ij(t),  V(t) = (1/L) Σ_{i=1}^{L} (1/C) Σ_{j=1}^{C} (y_ij(t) − ȳ_i(t))²,

where C denotes the number of nodes in the competition layer and L denotes the total number of samples; V(t) is the average variance of the outputs.
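The average-variance index used for the comparison can be sketched in one line of NumPy (function name ours): for an output matrix with one row per sample and one column per competition node, take the variance across nodes for each sample and average over samples.

```python
import numpy as np

def average_variance(Y):
    """Average over samples of the variance of outputs across competition nodes.
    Y has shape (L, C): Y[i, j] is the output y_ij of node j for sample i."""
    return float(np.mean(np.var(Y, axis=1)))
```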
For the normalized samples, in CSON the difference of each dimension between sample and weight, d_ij, belongs to [0, 1]. In BQSON, by applying the Bloch spherical distance, the difference of each dimension belongs to [0, π]. Hence, to make the comparison fair, we compare the average variance of BQSON after dividing it by π² with that of CSON. The contrast results are shown in Figure 4, which shows that the average variance of BQSON is clearly greater than that of CSON; this suggests that the spherical distance has better distinguishing ability than the Euclidean distance for densely distributed samples. The "setosa" samples are relatively independent, which leads all four algorithms to obtain ideal clustering results. Both the "versicolor" and "virginica" samples present an overlapping, dense distribution, for which the BQSON still obtains ideal clustering results. However, the clustering effect is not ideal for K-means and NNC, which are based on the Euclidean distance, and the CSON is completely unable to separate these two classes of samples.
Secondly, the BQSON adopts a new way of weight adjustment. In CSON, the vector differences between samples and weights are directly used to adjust the weights, which is strongly influenced by the learning rate and makes fine adjustment difficult. In BQSON, however, the weights are adjusted by rotating them towards a sample so as to approximate this sample. Because the rotation is performed on the Bloch sphere, it allows a subtle adjustment of the weights, which enhances the clustering ability of BQSON.

Conclusion
In this work, a quantum self-organization network clustering algorithm is proposed. In our approach, the weights of the nodes in the competition layer are updated by rotating qubits on the Bloch sphere. The comparative IRIS experiments show that the clustering ability of the proposed approach is significantly higher than that of the classic self-organizing network. The Precision Ratio and Recall Ratio of BQSON are 7.5467% and 8.6667% higher than those of K-means, and 3.7833% and 5.3333% higher than those of NNC. On the other hand, the BQSON is less efficient: it introduces additional computing operations, such as computing the rotation axis, the rotation matrix, and the projection measurement, which increase the amount of calculation, prolong the running time, and reduce the efficiency of clustering. However, these operations greatly improve the clustering ability of BQSON. In other words, BQSON trades computing efficiency for clustering ability, which is consistent with the no-free-lunch theorem. Hence, how to enhance the computing efficiency of BQSON is the subject of further research.

Figure 1. Qubit description on the Bloch sphere.
Figure 2. The relation between the rotation axis and the vectors W and X.

Figure 3. The quantum self-organization neural networks model.

1. Quantum State Description of the Samples. First, all sample data are converted to [0, 1].

Let W_j = [|w_j1⟩, |w_j2⟩, ..., |w_jn⟩]^T denote the weight vector of the jth node in the competition layer.

For the lth sample X_l, in the competition-layer node array, select the neighbourhood of the winning node; the rotation angle of each weight qubit is computed from the learning rate β(t) and the neighbourhood coefficient h(j, j*).

Figure 4. The average variance contrasts of BQSON and CSON.