Finding the Asymptotically Optimal Baire Distance for Multi-Channel Data

A novel permutation-dependent Baire distance is introduced for multi-channel data. The optimal permutation is the one minimizing the sum of the pairwise Baire distances. It is shown that for most practical cases this minimum is attained by a new gradient descent algorithm introduced in this article. The algorithm is of biquadratic time complexity: quadratic both in the number of channels and in the size of the data. The optimal permutation allows us to introduce a novel Baire-distance kernel Support Vector Machine (SVM). Applied to benchmark hyperspectral remote sensing data, this new SVM produces results comparable with those of the classical linear SVM, but with higher kernel target alignment.


Introduction
The Baire distance was introduced to classification by [1] in order to produce clusters by grouping data into "bins". In this way, they seek to find the inherent hierarchical structure in data defined by their features. Now, if there are many different features associated with the data, then it is reasonable to sort the feature vector by some criterion which ranks the contribution of each feature to this inherent hierarchical structure. We will see that there is a natural Baire distance associated with any given permutation of features. Hence, it is natural to ask for this sorting task to be performed in reasonable time. In general, there is no efficient way of finding an optimal ordering of n variables by exhaustive search, but if the task is to find a permutation satisfying some optimality condition, then often a gradient descent algorithm can be applied. In that case, the run-time complexity decreases considerably.
In this paper, we introduce a permutation-dependent Baire distance for data with n features, and we define a linear cost function depending on the pairwise Baire distances for all possible permutations. The Baire distance we use depends on a parameter ε, and we argue that the precise value of this parameter is seldom to be expected of interest. On the contrary, we believe that in practice it makes more sense to vary this parameter and to study the limiting case ε → 0. Our theoretical result is that there is a gradient descent algorithm which can find the asymptotic minimum for ε → 0 with a run-time complexity of O(dn²), where d is the number of all data pairs. The Support Vector Machine (SVM) is a well known technique for kernel based classification. In kernel based classification, the similarity between input data is modelled by kernel functions. These functions are employed to produce kernel matrices, which can be seen as similarity matrices of the input data in reproducing kernel Hilbert spaces. Via optimization of a Lagrangian minimization problem, a subset of input points is found which is used to produce a separating hyperplane between the data of the various classes. The final decision function depends only on the position of these data in the feature space and does not require the estimation of first or second order statistics of the data. The user has a lot of freedom in how to construct the kernel functions, which offers the option of producing individual kernel functions tailored to the data.
As an application of our theoretical result, we introduce the new class of Baire-distance kernels, which are functions of our parametrized Baire distance. For the asymptotically optimal permutation, the resulting Baire distance SVM yields results comparable with the classical linear SVM on the AVIRIS Indian Pines dataset, a well known hyperspectral remote sensing dataset. Furthermore, the kernel target alignment [2], which represents an a priori quality assessment, favours our new Baire distance multi-kernel SVM constructed from Baire distance kernels at different feature resolutions. This new multi-kernel combines in a sense our first approach with the approach of [1], as it combines the different resolutions defined by their method of "bin" grouping. As a preliminary practical result, we obtain greater completeness in many of our clusters than with the classical linear SVM clusters.

Ultrametric Distances for Multi-Channel Data
After a short review of the ultrametric parametrized Baire distance, it is shown how to find, for n variables, their asymptotically optimal permutation with respect to a linear cost function defined by permutation-dependent Baire distances. This has quadratic run-time complexity in n if the data size is fixed.

Definition 2.1. Let x, y be words over an alphabet A. Then the Baire distance is

d_ε(x, y) = ε^ℓ(x, y),

where ε ∈ (0, 1) and ℓ(x, y) is the length of the common initial subword of x and y (cf. Figure 1). Later on, we will study the limiting case ε → 0.

Remark 2.2. The metrics d_ε are all equivalent in the sense that they generate the same topologies.

The Baire distance is important for classification, because it is an ultrametric. In particular, the strict triangle inequality

d_ε(x, z) ≤ max{d_ε(x, y), d_ε(y, z)}

holds true. This is shown to lead to efficient hierarchical classification with good classification results [1] [3] [4].

Data representation is often related to some choice of alphabet. For instance, the distinction "Low" and "High" leads to A = {L, H} and is used in [4]. The decimal representation of numbers yields A = {0, 1, …, 9} for the method in [1]. A very general encoding with arithmetic flavour is given by subsets A ⊆ O_K inside the ring of integers O_K of a p-adic number field K, with all a ∈ A pairwise different modulo the maximal ideal [5]. No knowledge of p-adic number theory is required for what comes after the following Example 2.3. However, the interested reader may consult [6] for a first application of such mathematics in classification.

Example 2.3. The simplest example of a p-adic number field K in data representation is given by taking K = Q_2, the field of 2-adic numbers. Then O_K = Z_2 is the ring of 2-adic integers, and the alphabet is A = {0, 1}. The numbers 0, 1 represent the finite field F_2 in a standard way which is often used when 2-adic numbers are written out as power series in 2, i.e. as finite or infinite binary numbers.
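For concreteness, the following Python sketch computes the parametrized Baire distance of Definition 2.1 for two equal-length words over a finite alphabet; the function name, the default value ε = 0.1 and the sample words are illustrative choices, not part of the original method.

```python
def baire_distance(x, y, eps=0.1):
    """Parametrized Baire distance d_eps(x, y) = eps ** l(x, y), where
    l(x, y) is the length of the common initial subword of x and y."""
    l = 0
    for a, b in zip(x, y):
        if a != b:
            break
        l += 1
    return eps ** l

# Words over the binary alphabet A = {0, 1} of Example 2.3:
print(baire_distance("0110", "0111"))  # common prefix "011", so eps ** 3
print(baire_distance("0110", "1110"))  # empty common prefix, so eps ** 0 = 1.0
```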
The role of the parameter ε in classification can be described as follows. Let X ⊆ A^n be a set of words. Then X defines a unique dendrogram D(X), and d_ε defines a metric on this dendrogram. By the equivalence of the Baire metrics, the metric dendrograms are tree-isomorphic for all ε. However, optimal classification results in general do depend on ε, as has been observed in Theorem 2 of [7], where the result is formulated for p-adic ultrametrics.

Optimal Baire Distance
Given data X and attributes V_1, …, V_n, each datum is encoded as x = x_1 ⋯ x_n, i.e. a word with letters from the alphabet V. This yields the Baire distance d_ε(x, y) for every pair of data. In order to determine a suitable permutation for the data, consider the average Baire distance. A high average Baire distance will arise if there is a large number of singletons and branching is high up in the hierarchy. On the other hand, if there are lots of common initial features, then the average Baire distance will be low. In that case, clusters tend to have a high density, and there are few singletons. From these considerations, it follows that the task is to find a permutation σ ∈ S_n for which the cost function

E_ε(σ) = Σ_{x, y ∈ X} d_ε^σ(x, y)    (1)

is minimal, leading to the optimal Baire distance d_ε^σ. Any method attempting to fulfil this task by exhaustive search must overcome the problem that the number n! of permutations grows super-exponentially. Sorting the terms of (1) by the length of the common initial subword yields

E_ε(σ) = Σ_{ν=0}^{n} α_ν^σ ε^ν,    (2)

where α_ν^σ is the number of data pairs (x, y) with identical values exclusively in the set {σ(1), …, σ(ν)} of the first ν permuted channels. These counts refine to a matrix (α_{ν,k}^σ), where the inner index k records the length ℓ(x, w) of the common initial subword with the standard word w = w_1 ⋯ w_n obtained by fixing an ordering on the (otherwise arbitrary) alphabet, so that α_ν^σ = Σ_k α_{ν,k}^σ. Some first properties of the α_{ν,k}^σ follow from Equation (2) above, and they imply some first properties of α_ν^σ. An important observation is that α_ν^σ depends only on the first ν + 1 values σ(1), …, σ(ν + 1). This will be exploited in the following section, where it is shown how optimal permutations σ can be computed.
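The cost function (1) and its exhaustive minimisation over S_n can be written down directly, reusing baire_distance from above; this brute-force sketch (with our own naming) is practical only for very small n and mainly serves to make the n! obstacle concrete.

```python
from itertools import permutations

def cost(data, sigma, eps):
    """E_eps(sigma): sum of pairwise Baire distances of the channel-permuted data."""
    total = 0.0
    for i in range(len(data)):
        for j in range(i + 1, len(data)):
            x = [data[i][s] for s in sigma]
            y = [data[j][s] for s in sigma]
            total += baire_distance(x, y, eps)
    return total

def optimal_permutation_bruteforce(data, eps=0.1):
    """Minimise E_eps over all n! channel permutations (tiny n only)."""
    n = len(data[0])
    return min(permutations(range(n)), key=lambda s: cost(data, s, eps))
```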
The following two examples list all values of α_{ν,k} in the case σ = id for n = 3, 4. By effecting the permutation σ, one obtains the corresponding matrices (α_{ν,k}^σ), and summing over the row labelled ν yields α_ν^σ.

Finding Optimal Permutations
Let ∆ be the simplex on the n channels labelled by the set N = {1, …, n}. The faces of ∆ are given by the subsets of N or, equivalently, by the elements of the power set 2^N. The function (1) is to be minimised, where σ is a permutation of the set N. A combinatorial-topological point of view appears to be helpful in this task. Namely, view the simplex ∆ as a (combinatorial) simplicial complex. The star of an i-face x ∈ ∆ is the set of ν-faces attached to x with ν ≥ i (including x itself). The weak topology on ∆ is generated by the stars.
To ∆ is associated a graph Γ_∆ whose vertices are the faces of ∆, and an edge is given by a pair (v, v′) consisting of an i-face v and an (i + 1)-face v′ such that v is a face of v′. For an edge e, o(e) will denote its origin vertex and t(e) its terminal vertex. The counts α_ν^σ yield weights on Γ_∆ in the following way: the weight w(e) of an edge e is the number of data pairs which coincide on all channels in o(e) but not on all channels in t(e) (cf. Lemma 2.15 below). Observe that all edge weights are non-negative: w(e) ≥ 0. The graph Γ_∆ is a directed acyclic graph with origin vertex v_∅ and terminal vertex v_N. An injective path γ: v_∅ → v_N is given by its sequence of edges (e_1, …, e_n).

Definition 2.6. A permutation σ ∈ S_n is said to be compatible with an injective path γ: v_∅ → v_N given by the sequence of sets ∅ = S_0 ⊂ S_1 ⊂ ⋯ ⊂ S_n = N, if S_ν = {σ(1), …, σ(ν)} for all ν.

Proposition 2.7. Let σ ∈ S_n be compatible with the path γ given as in Definition 2.6, with edges e_ν = (S_{ν−1}, S_ν). Then

E_ε(σ) = Σ_{ν=1}^{n} w(e_ν) ε^{ν−1} + c_N ε^n,

where c_N is the number of data pairs coinciding on all n channels and hence does not depend on σ.

Proof. Let e_ν = (S_{ν−1}, S_ν) be an edge on γ given by the pair of sets it joins. The data pairs counted by w(e_ν) are exactly those whose common initial subword has length ν − 1, from which the assertion follows for σ = id by summation over the edges along γ. For arbitrary σ compatible with γ, the proof is analogous to this case.

The minimum of E_ε(σ) can thus be found by travelling along a shortest path from v_∅ to v_N, where the edge e_ν is given the length w(e_ν) ε^{ν−1} as in Proposition 2.7. One method for finding such shortest paths is given by the well known Dijkstra algorithm.

Corollary 2.9. Dijkstra's shortest path algorithm on Γ_∆ finds the global minima of E_ε(σ).

The main problem with applying Corollary 2.9 is the size of Γ_∆ for large n. However, we believe that it is of practical interest to consider E_ε(σ) for sufficiently small ε. We will show below that in this case, the following gradient descent finds the global minimum in an exhaustive manner.

Algorithm 2.10 (gradient descent). Start in v_∅. In each step, follow an edge of minimal weight among the edges leaving the current vertex, until v_N is reached after n steps. Output a permutation compatible with the traversed path.

Lemma 2.11. For every permutation τ ∈ S_n there exists a constant C_τ > 0 such that E_ε(σ) ≤ E_ε(τ) for all 0 < ε < C_τ and every permutation σ obtained by Algorithm 2.10.

Proof. We may assume that there exists some ν with property (10), as otherwise C_τ can be chosen arbitrarily. Assume now further that ν is minimal with property (10). Still further, we may assume that there exists some µ with property (11), as otherwise σ could not be derived by gradient descent. The reason is that at step ν that method would descend to τ(ν) instead of σ(ν), since ν is the first occurrence of property (10). Let now µ be minimal with (11). All this implies that the difference P_τ(t) = E_t(τ) − E_t(σ) is a polynomial with real coefficients which is positive for all sufficiently small t > 0. Hence, by continuity of P_τ, there exists a small neighbourhood of 0 on which P_τ(t) is still positive. This neighbourhood defines the desired constant C_τ.
An immediate consequence of the lemma is that gradient descent is asymptotically the method of choice:

Theorem 2.12. There exists a constant C > 0 such that, for all 0 < ε < C, gradient descent on Γ_∆ finds a global minimum of the cost function E_ε.

Proof. The constant C := min{C_τ : τ ∈ S_n}, a minimum over finitely many positive constants from Lemma 2.11, has the desired property.
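Corollary 2.9 can be made concrete with the following sketch, which runs Dijkstra's algorithm on the lattice of channel subsets with edge lengths (c(I) − c(J)) ε^|I| as in Proposition 2.7; the helper coincidence_count and all names are ours, and the naive pair enumeration is kept for clarity.

```python
import heapq

def coincidence_count(data, channels):
    """c(I): the number of data pairs coinciding on all channels in I."""
    count = 0
    for i in range(len(data)):
        for j in range(i + 1, len(data)):
            if all(data[i][k] == data[j][k] for k in channels):
                count += 1
    return count

def dijkstra_permutation(data, eps):
    """Shortest path from v_empty to v_N on Gamma_Delta; the path length equals
    E_eps(sigma) up to a sigma-independent constant. Feasible only for small n,
    since Gamma_Delta has 2 ** n vertices."""
    n = len(data[0])
    full = frozenset(range(n))
    start = frozenset()
    dist = {start: 0.0}
    order = {start: []}        # channel ordering realising dist
    heap = [(0.0, 0, start)]   # (length, tie-break id, subset)
    tie = 1
    while heap:
        d, _, I = heapq.heappop(heap)
        if d > dist[I]:
            continue           # stale queue entry
        if I == full:
            return order[I]
        c_I = coincidence_count(data, I)
        for j in range(n):
            if j not in I:
                J = I | {j}
                length = d + (c_I - coincidence_count(data, J)) * eps ** len(I)
                if length < dist.get(J, float("inf")):
                    dist[J] = length
                    order[J] = order[I] + [j]
                    tie += 1
                    heapq.heappush(heap, (length, tie, J))
```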
The competitiveness of the gradient descent method is manifest in the following remarks.

Remark 2.13. Algorithm 2.10 is of run-time complexity at most O(n²).

Proof. In the first step, there are n choices of possible edges to follow, and after n steps a permutation is found. Finding the minimal edge in step ν can be done with complexity O(n − ν). This proves the upper bound.

Notice that this efficiency holds only in the case that the weights w of Γ_∆ are already given. However, this cannot be expected in general. Therefore, we investigate here the computational cost of the weights w(e) along a gradient descent path γ in Γ_∆, which seems at first sight exponential in the dimension of ∆. In particular, the weights of the very first n edges look cumbersome to compute. The following notation is helpful:

Definition 2.14. For I ∈ 2^N, let Z(I) denote the set of data pairs on which the channels in I coincide, and c(I) := |Z(I)|.

A trivial, but useful, observation is

Z(I ∪ J) = Z(I) ∩ Z(J),    (15)

as this allows to define a nice way of computing the weights:

Lemma 2.15. Let I ∈ 2^N be a vertex. Then for any edge e with origin o(e) = I and terminus t(e) = J, it holds true that

w(e) = c(I) − c(J).

Proof. This is an immediate consequence of identity (15).

Its usefulness is that the right hand side is computed more quickly than the left hand side:

Lemma 2.16. Given Z(I), the cost of computing c(J) for an edge from I to J is O(c(I)).

This yields the following refinement of Algorithm 2.10, which computes the required weights on the fly.

Algorithm 2.17.
Input. The set Z of all data pairs and the channel set N.
Step 1. Collect in E_1 all edges leaving the current vertex whose weight w(e) = c(I) − c(J) is minimal; this requires, for each remaining channel, counting the pairs in Z which coincide on it. Follow such an edge: append the chosen channel i to the path, replace Z by the pairs in Z coinciding on channel i, and remove i from N.
Step ν > 1. Repeat Step 1 with the current values of Z and N, if both sets are non-empty; if Z is empty, complete the path by the remaining channels in any order.
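Our reading of Algorithm 2.17 can be sketched as follows: Z holds the data pairs still coinciding on all channels chosen so far, and each step follows an edge of minimal weight, i.e. keeps as many coincidences as possible. The names are ours, and data is assumed to be a list of equal-length feature sequences.

```python
def gradient_descent_permutation(data):
    """Greedy descent on Gamma_Delta in the limit eps -> 0: each step picks the
    remaining channel on which most of the still-coinciding pairs agree, which
    minimises the edge weight w(e) = c(I) - c(J) of Lemma 2.15."""
    m, n = len(data), len(data[0])
    Z = [(i, j) for i in range(m) for j in range(i + 1, m)]  # all data pairs
    remaining = set(range(n))
    sigma = []
    while remaining and Z:
        best = max(remaining,
                   key=lambda ch: sum(data[i][ch] == data[j][ch] for i, j in Z))
        sigma.append(best)
        remaining.remove(best)
        Z = [(i, j) for (i, j) in Z if data[i][best] == data[j][best]]
    sigma.extend(sorted(remaining))  # once Z is empty the order no longer matters
    return sigma
```

Each pass over Z costs O((n − ν)|Z_ν|), which sums up to the bound of Theorem 2.18 below.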
Output. A path γ: v_∅ → v_N, i.e. a permutation σ compatible with γ. Following all edges collected in the sets E_ν instead yields the subgraph of Γ_∆ containing all paths with smallest sum of edge weights from v_∅ to v_N, so that the minima are found in an exhaustive manner. The algorithm clearly terminates after n steps.

Theorem 2.18. Algorithm 2.17 has run-time complexity at most O(n²d), where d is the number of all data pairs.

Proof. The complexity of Step ν is at most O((n − ν)|Z_ν|), with Z_ν being the set Z at that step. The reason is that, according to (15) and Lemma 2.16, each weight w(e) is computed with complexity O(|Z_ν|), and there are n − ν edges going out of the vertex reached at step ν. Bounding the cardinalities |Z_ν| by d from above, and summing the costs, yields the desired bound.

Notice that the constant C of Theorem 2.12 can be very close to zero. That would mean that the gradient descent method yields only a local minimum for most values of ε. However, we believe that there is no polynomial-time algorithm which finds a minimum that is global for every ε, or at least for all ε below a prescribed threshold.

Combining Ultrametrics with SVM
Within this section, the potential of integrating ultrametrics into a state-of-the-art classifier, the Support Vector Machine (SVM) as introduced by [8], is presented. SVM has been intensively applied to classification tasks in remote sensing, and several methodological comparisons have been established in previous work of the authors [9] [10]. At first, our methodology is outlined. Secondly, a classification result for a standard benchmark from hyperspectral remote sensing is shown.

Methodology
Kernel matrices are the representation of the similarity between input data used for SVM classification. To integrate ultrametrics into SVM classification, the crucial step is therefore to create a new kernel function [11] [12]. Instead of representing the Euclidean distance between input data, this new kernel function represents the Baire distance between them. To obtain an optimal kernel based on the Baire distance, at first an optimal permutation σ is found as outlined in Section 2.3 by using Algorithm 2.17. The new kernel K_σ is thus given as a function of d_σ := d_ε^σ for some choice of ε sufficiently small, and we call it a Baire distance kernel. This new kernel function could be used for classification directly. However, one feature of kernel based classification is that multiple kernel functions can be combined to increase classification performance [13]. The Baire distance is dependent on the resolution (bit depth) of the data. Two very similar data will maintain a large ℓ_σ-value, i.e. a long common initial subword, even at high bit depths, while the ℓ_σ-value of less similar data deteriorates at higher bit rates. Thus, by varying the bit depth of the data, one obtains additional information about the similarity of the data. Therefore, a kernel is to be created which incorporates the information about similarity at each resolution. At first, data with 8-bit depth are used. An optimal σ_8 is computed as described in Section 2.3. Afterwards, a kernel K_{σ_8} is computed, which encodes the Baire distance between data for the given σ at 8 bit. In the next step, the data are compressed to 7-bit depth. Again, an optimal σ_7 is found, a new kernel is computed, and the kernels are summed up.

For bit depths b ∈ {1, …, 8}, kernels K_{σ_b} are computed and summed up to the multiple kernel

K_mult = Σ_{b=1}^{8} K_{σ_b}.    (16)

This multiple kernel also belongs to the new class of Baire distance kernels and has the advantage of incorporating the similarity at the different bit depths. It is compared against the standard linear kernel frequently used for SVM:

K_lin(x, y) = ⟨x, y⟩,    (18)

where the bracket ⟨·, ·⟩ denotes the standard scalar product on the Euclidean space into which the data are mapped.
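For illustration, the following sketch assembles K_mult from Baire distance kernels at bit depths 1 to 8, reusing baire_distance and gradient_descent_permutation from Section 2. The particular kernel form K_σ(x, y) = exp(−d_σ(x, y)) is an assumption made here for concreteness; it is one possible choice of a function of d_σ, not necessarily the one used for the reported experiments.

```python
import numpy as np

def quantize(data01, bits):
    """Requantize features scaled to [0, 1] to the given bit depth."""
    levels = 2 ** bits
    return np.clip((data01 * levels).astype(int), 0, levels - 1)

def baire_kernel_matrix(words, sigma, eps=0.1):
    """Gram matrix of a Baire distance kernel for a fixed permutation sigma."""
    m = len(words)
    K = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            x = [words[i][s] for s in sigma]
            y = [words[j][s] for s in sigma]
            K[i, j] = np.exp(-baire_distance(x, y, eps))  # assumed kernel form
    return K

def multi_kernel(data01):
    """K_mult: sum over bit depths b = 1, ..., 8 of K_{sigma_b}, cf. Equation (16)."""
    K = 0.0
    for b in range(1, 9):
        words = quantize(data01, b).tolist()
        sigma_b = gradient_descent_permutation(words)
        K = K + baire_kernel_matrix(words, sigma_b)
    return K
```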

Application
Within this section, a comparison on a standard benchmark dataset from hyperspectral remote sensing is presented, cf. also [14]. The AVIRIS Indian Pines dataset consists of a 145 × 145 pixel hyperspectral image with 220 spectral channels (Figure 2). It is well known due to the complexity of the classification problem it represents. The 16 land use classes, consisting mainly of crop classes, are to be separated. These are difficult to separate, since they are spectrally very similar (due to the early phenological stage of the vegetation). Although our implementation of Algorithm 2.17 is able to process all 220 features, only the first six principal components are considered. The reason is that there are two sources of coincidences. The first is coincidence due to the spectral similarity of land cover classes (signal); the second is coincidence due to noise. For this work, only the coincidence of signal is relevant. Since the algorithm is not fit to distinguish between the two sources, only the first six principal components are considered relevant. They explain 99.66% of the sum of eigenvalues and are therefore believed to contribute considerably to coincidences due to signal and only marginally to coincidences due to noise.
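A sketch of this preprocessing, assuming the image has already been loaded into a NumPy array cube of shape (145, 145, 220); the loading step is dataset-specific and omitted, and the rescaling to [0, 1] is our own convention so that the quantization step above applies directly.

```python
import numpy as np
from sklearn.decomposition import PCA

pixels = cube.reshape(-1, 220)               # one row of 220 channels per pixel
pca = PCA(n_components=6).fit(pixels)
features = pca.transform(pixels)             # first six principal components
print(pca.explained_variance_ratio_.sum())   # the text reports 99.66%

# Rescale each component to [0, 1] for the bit-depth quantization step.
features01 = (features - features.min(axis=0)) / np.ptp(features, axis=0)
```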
At first, the dataset is classified with a linear kernel SVM as given in Equation (18). A visual result can be seen in Figure 3 (left). The overall accuracy yielded is 53.5%, and the κ-coefficient is 0.44. As can be seen, the dataset requires more complex kernel functions than linear ones. Then, a multiple kernel K_mult of the form (16) is computed as described in Section 3.1. The dataset is again classified using an SVM, and a visual result can be seen in Figure 3 (right). The overall accuracy yielded is 53.7%, and the κ-coefficient is 0.45. The overall accuracy is the percentage of correctly classified pixels from the reference data. The κ-coefficient is a statistical measure of the agreement, beyond chance, between the algorithm's results and the manual labelling in the reference data. Both are global measurements of performance.
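The classification step itself can be sketched with scikit-learn's precomputed-kernel SVM; here train_idx, test_idx and labels are hypothetical names for an index split of the labelled reference pixels, and K is a Gram matrix over all pixels, such as K_mult above.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, cohen_kappa_score

svm = SVC(kernel="precomputed")
svm.fit(K[np.ix_(train_idx, train_idx)], labels[train_idx])  # train-vs-train block
pred = svm.predict(K[np.ix_(test_idx, train_idx)])           # test-vs-train block

print("overall accuracy:", accuracy_score(labels[test_idx], pred))
print("kappa coefficient:", cohen_kappa_score(labels[test_idx], pred))
```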
As can be seen, both results resemble each other closely for the most part. However, the result produced with the linear kernel tends to confuse the brown crop classes in the north with green pasture classes. On the other hand, the linear kernel SVM better recognizes the street in the western part of the image.
The kernel target alignment between these kernels and the ideal kernel was computed. The ideal kernel is defined via the label L associated to each pixel: it has value 1 if the labels coincide, and value 0 otherwise. Note that the kernel target alignment proposed by [2] represents an a priori quality assessment of a kernel's suitability. It is defined as

A(K_1, K_2) = ⟨K_1, K_2⟩ / √(⟨K_1, K_1⟩ ⟨K_2, K_2⟩),

where ⟨·, ·⟩ denotes the usual scalar product between Gram matrices.

The kernel target alignment takes values in the interval [−1, +1], with one being the best. The kernel target alignment of K_lin was 0.37. The kernel K_mult yielded a higher alignment of 0.47, thus giving reason for expecting a higher overall performance of the latter. The producers' accuracies (pa) and users' accuracies (ua) for the individual classes are shown in Table 1 and Table 2.
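The alignment is straightforward to compute from a Gram matrix and the pixel labels; the following sketch (names are ours) uses the 0/1 ideal kernel described above.

```python
import numpy as np

def kernel_target_alignment(K, y):
    """A(K, K_ideal) with K_ideal[i, j] = 1 if y[i] == y[j] and 0 otherwise,
    using the Frobenius scalar product between Gram matrices."""
    y = np.asarray(y)
    K_ideal = (y[:, None] == y[None, :]).astype(float)
    num = np.sum(K * K_ideal)
    den = np.sqrt(np.sum(K * K) * np.sum(K_ideal * K_ideal))
    return num / den
```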
The producers' accuracy shows what percentage of a particular ground class was correctly classified. The users' accuracy is a measure of the reliability of an output map generated from a classification scheme; it tells what percentage of a predicted class truly corresponds to that class in the reference. Both are local (i.e. class-dependent) measurements of performance.
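Both per-class measures can be read off the confusion matrix; a short sketch, reusing labels, test_idx and pred from the classification step above:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

C = confusion_matrix(labels[test_idx], pred)  # rows: reference, columns: prediction
producers_acc = np.diag(C) / C.sum(axis=1)    # share of each ground class found
users_acc = np.diag(C) / C.sum(axis=0)        # reliability of each mapped class
```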

Figure 1. Two words with a common initial subword.