Data Perturbation Analysis of the Support Vector Classifier Dual Model

The paper establishes a theorem of data perturbation analysis for the support vector classifier dual problem, from which the data perturbation analysis of the corresponding primal problem may be performed through standard results. The theorem gives the partial derivatives of the optimal solution and of the corresponding optimal decision function with respect to the data parameters, and thereby provides a basis for quantitative analysis of the influence of data errors on the optimal solution and the optimal decision function. It thus provides the foundation for analyzing the stability and sensitivity of the support vector classifier.


Introduction
Many methods of data mining exist, including machine learning, which is a major research direction of artificial intelligence. The theory of statistics plays a fundamental role in machine learning. However, the classic theory of statistics is often based on large-sample properties, while in reality we often face small samples, sometimes with a very limited number of observations due to resource or other constraints. Consequently, the performance of some large-sample methods may be unsatisfactory in certain real applications. Vapnik and collaborators pioneered machine learning for finite samples in the 1960s and developed statistical learning theory [1] [2]. To deal with the inconsistency between minimizing the empirical risk and minimizing the expected risk in statistical learning, they proposed the principle of structural risk minimization [5]. Because it is not guaranteed that a linear separator over the original input space $R^n$ can be found for the training input set, a transformation $\varphi(x)$ is often introduced from the input space $R^n$ to a high-dimensional Hilbert space $H$ such that the training input set corresponds to a training set in $H$. A hyperplane is then sought in $H$ to separate the input space and thereby solve data mining classification problems.
The essence of the SVC problem for data mining is to seek a real-valued function $f(x)$ on the training input set in $R^n$ such that the decision function $\operatorname{sgn} f(x)$ infers the corresponding category in the set $\{-1, +1\}$. The primal problem of the standard support vector classifier (SVC) is to

\[
\min_{w, b, \psi} \ \frac{1}{2} w^T w + C \sum_{i=1}^{m} \psi_i
\quad \text{s.t.} \quad y_i \left( w^T \varphi(x_i) + b \right) \ge 1 - \psi_i, \ \ \psi_i \ge 0, \ \ i = 1, \ldots, m, \quad (1)
\]

where $T$ stands for transpose, $w$ consists of the slopes of the hyperplane, $b \in R$ is the intercept of the hyperplane, $\psi = (\psi_1, \ldots, \psi_m)^T$ is the vector of slack variables, and $C > 0$ is the penalty parameter. The quadratic Wolfe dual problem of the above primal problem (1) is to

\[
\min_{\alpha} \ \frac{1}{2} \sum_{i=1}^{m} \sum_{j=1}^{m} y_i y_j \alpha_i \alpha_j K(x_i, x_j) - \sum_{i=1}^{m} \alpha_i
\quad \text{s.t.} \quad \sum_{i=1}^{m} y_i \alpha_i = 0, \ \ 0 \le \alpha_i \le C, \ \ i = 1, \ldots, m, \quad (2)
\]

where $K(x_i, x_j) = \varphi(x_i)^T \varphi(x_j)$ is the kernel function. The relationships between the optimal solution $(w^*, b^*, \psi^*)$ of the primal problem (1) and the optimal solution $\alpha^*$ of the dual problem (2) include

\[
w^* = \sum_{i=1}^{m} \alpha_i^* y_i \varphi(x_i).
\]

The corresponding optimal decision function is given by

\[
f(x) = \operatorname{sgn} \left( \sum_{i=1}^{m} \alpha_i^* y_i K(x_i, x) + b^* \right).
\]

When the training data contain errors, the errors have an impact on the optimal solution and the corresponding optimal decision function of the SVC model. When the upper bound of the data errors is known, the method of data perturbation analysis can be used to derive upper bounds on the resulting errors in the optimal solution and its corresponding optimal decision function.
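To make the primal-dual relationships above concrete, the following Python sketch recovers the dual variables $\alpha_i^*$ of problem (2) from a fitted classifier and checks the dual constraints. It is a minimal illustration only: the synthetic data set and the use of scikit-learn as the solver are assumptions, not part of the paper.

```python
# Minimal sketch (assumptions: synthetic data, scikit-learn as the QP solver).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 0.6, (20, 2)),   # class -1 inputs
               rng.normal(+1.0, 0.6, (20, 2))])  # class +1 inputs
y = np.array([-1] * 20 + [+1] * 20)

C = 1.0
clf = SVC(C=C, kernel="rbf", gamma=1.0).fit(X, y)

# dual_coef_ stores y_i * alpha_i* for the support vectors, so alpha_i* is
# recovered by multiplying by y_i (recall y_i is in {-1, +1}).
alpha = np.zeros(len(y))
alpha[clf.support_] = clf.dual_coef_.ravel() * y[clf.support_]

assert np.all((alpha >= -1e-9) & (alpha <= C + 1e-9))  # box constraint of (2)
assert abs(np.dot(y, alpha)) < 1e-6                    # equality constraint of (2)

# decision_function evaluates sum_i alpha_i* y_i K(x_i, x) + b*; its sign is f(x).
print(np.sign(clf.decision_function(X[:3])))
```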
We are interested in the stability and sensitivity analysis of the optimal solution and its corresponding optimal decision function. Our analysis is based on Fiacco's Theorem on the sensitivity analysis of a general class of nonlinear programming problems [6] [7] [8]. The second order sufficiency condition required for the Fiacco Theorem was further studied in [9]. On the other hand, kernel functions have been widely used in machine learning [10] [11] [12]. In this paper, we establish a Theorem on data perturbation analysis for a general class of SVC problems with kernel functions. The paper is organized as follows. The main Theorem and its Lemmas are presented in the next section. Section 3 concludes the paper.

Data Perturbation Analysis of the SVC Dual Problem
Suppose that $(w^*, b^*, \psi^*)$ is the optimal solution of the primal problem (1). Corresponding to the solution $(w^*, b^*, \psi^*)$, we divide, for convenience, the training data $(x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m)$ into categories $A, B, C$ as follows:

1) $A$ category: points satisfying $y_i \left( w^{*T} \varphi(x_i) + b^* \right) = 1$ and $\psi_i^* = 0$;

2) $B$ category: points satisfying $y_i \left( w^{*T} \varphi(x_i) + b^* \right) > 1$ and $\psi_i^* = 0$;

3) $C$ category: points satisfying $y_i \left( w^{*T} \varphi(x_i) + b^* \right) < 1$ and $\psi_i^* > 0$.

Let's start with a lemma.

Lemma 1. Suppose that $(w^*, b^*, \psi^*)$ is the optimal solution of the primal problem (1). Then the set of all working constraint indices is given by

\[
I(w^*, b^*, \psi^*) = \left\{ i : y_i \left( w^{*T} \varphi(x_i) + b^* \right) = 1 - \psi_i^*, \ i \in A \cup C \right\} \cup \left\{ i : \psi_i^* = 0, \ i \in A \cup B \right\}.
\]

Proof. The proof is straightforward from the definitions of working constraint index and the observation that points in the $A$ and $B$ categories imply that $\psi_i^* = 0$ and points in the $C$ category imply that $\psi_i^* > 0$. ∎
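By the standard Karush-Kuhn-Tucker conditions linking (1) and (2), the three categories correspond to $0 < \alpha_i^* < C$ for $A$, $\alpha_i^* = 0$ for $B$, and $\alpha_i^* = C$ for $C$. The sketch below (synthetic data and scikit-learn again assumed purely for illustration) performs this three-way split numerically.

```python
# Hedged sketch: split the training points into categories A, B, C using the
# dual variables alpha_i* (synthetic data; scikit-learn assumed as solver).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.0, 0.8, (25, 2)), rng.normal(+1.0, 0.8, (25, 2))])
y = np.array([-1] * 25 + [+1] * 25)

C, eps = 1.0, 1e-6
clf = SVC(C=C, kernel="rbf", gamma=1.0).fit(X, y)
alpha = np.zeros(len(y))
alpha[clf.support_] = clf.dual_coef_.ravel() * y[clf.support_]

margin = y * clf.decision_function(X)                   # y_i (w*^T phi(x_i) + b*)
cat_A = np.where((alpha > eps) & (alpha < C - eps))[0]  # on the margin
cat_B = np.where(alpha <= eps)[0]                       # margin > 1, psi_i* = 0
cat_C = np.where(alpha >= C - eps)[0]                   # margin < 1, psi_i* > 0
print("sizes:", len(cat_A), len(cat_B), len(cat_C))
print("A-category margins (should be close to 1):", margin[cat_A].round(3))
```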
Next we offer a sufficient condition for the linear independence of the gradients of the working constraints of the dual problem (2).

Lemma 2. Suppose that $\alpha^*$ is the optimal solution of the dual problem (2). If there exists any component $\alpha_j^*$ such that $0 < \alpha_j^* < C$, then the gradients of the working constraints of the dual problem (2) are linearly independent.

Proof. The working constraints of the dual problem (2) at $\alpha^*$ are $-\alpha_i \le 0$ for $i \in B$ (where $\alpha_i^* = 0$), $\alpha_i - C \le 0$ for $i \in C$ (where $\alpha_i^* = C$), and the equality constraint $y^T \alpha = 0$; for $i \in A$ we always have $0 < \alpha_i^* < C$, so neither bound constraint is working. The corresponding set of gradients is $\{-e_i : i \in B\} \cup \{e_j : j \in C\} \cup \{y\}$, where $e_i$ denotes the $i$-th unit vector of $R^m$. Since it is assumed that there is a support vector multiplier $\alpha_i^*$ such that $0 < \alpha_i^* < C$, the number of unit-vector gradients is smaller than $m$, which is the dimension of the vector $\alpha$. Therefore the vector $y$, whose $m$ components are either $-1$ or $+1$, cannot be expressed as a linear combination of the unit vectors indexed by $B \cup C$, because the components of $y$ indexed by $A$ are nonzero. Hence the set of working constraint gradients is linearly independent. ∎
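As a quick numerical illustration of Lemma 2's argument, the sketch below stacks the working-constraint gradients for a toy choice of $m$, the index sets, and the labels (all illustrative values, not data from the paper) and verifies full row rank.

```python
# Toy check of Lemma 2: the working-constraint gradients of dual problem (2)
# are linearly independent when category A is nonempty (illustrative values).
import numpy as np

m = 6
A, B, C_idx = [0, 3], [1, 4], [2, 5]    # A nonempty: some 0 < alpha_i* < C
y = np.array([+1, -1, +1, -1, +1, -1])  # label components are +1 or -1

# Gradients: -e_i for i in B (alpha_i = 0 active), +e_j for j in C
# (alpha_j = C active), and y for the equality constraint y^T alpha = 0.
grads = [-np.eye(m)[i] for i in B] + [np.eye(m)[j] for j in C_idx] + [y]
G = np.vstack(grads)

# Full row rank <=> linear independence; it holds because y has nonzero
# components on the A indices, which no unit vector from B or C can supply.
print(np.linalg.matrix_rank(G) == len(grads))  # True
```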
We use the lower case letter $p$ to denote the training input variable and $p_0$ to denote the training input data. When the form of the kernel function $\varphi(x)$ is known, the following main Theorem investigates how the errors in the training input data impact the optimal solution and the corresponding optimal decision function of the SVC model.

Theorem 3. Suppose that $\alpha^*$ is the optimal solution of the dual problem (2) when $p = p_0$, with corresponding Lagrange multiplier $\lambda^*$, that there exists a component $\alpha_j^*$ with $0 < \alpha_j^* < C$, and that the set of input vectors $\varphi(x_i)$ corresponding to points in category $A$ is linearly independent. Then the following results hold.

1) The optimal solution $\alpha^*(p)$ of the dual problem (2) and the corresponding Lagrange multipliers are once continuously differentiable functions of $p$ in a neighborhood of $p_0$, with $\alpha^*(p_0) = \alpha^*$; in particular, $\alpha^*(p)$ is a locally unique optimal solution to the dual problem (2) for any $p$ near $p_0$.

2) The partial derivatives of $\alpha^*(p)$, and hence of the corresponding optimal decision function, with respect to $p$ are obtained by solving the linear system produced by differentiating the Karush-Kuhn-Tucker system of the dual problem (2) with respect to $p$, in which $\nabla_p (y^T \alpha)$ is the gradient of $y^T \alpha$ with respect to $p$ and $L$ is the Lagrange function of the primal problem (1).

Proof. Our results follow directly from the Fiacco Theorem after checking the following three conditions required in the Fiacco Theorem: the Second Order Sufficiency Condition for $\alpha^*$, the Linear Independence of Gradients over the Working Constraint Set, and the Strict Complementarity Property.

Proof of the Second Order Sufficiency Condition. Suppose that $\alpha^*$ is the optimal solution to the dual problem (2). Then there exist multipliers such that the first order Karush-Kuhn-Tucker conditions hold at $\alpha^*$. The critical cone at $\alpha^*$ is contained in the subspace of vectors supported on the $A$ indices, and since the input vectors $\varphi(x_i)$ for $i \in A$ are assumed linearly independent, the Hessian of the dual objective is positive definite on this subspace. Hence the Second Order Sufficiency Condition is satisfied by $\alpha^*$.

Proof of the Linear Independence of Gradients over the Working Constraint Set. This is Lemma 2.

Proof of the Strict Complementarity Property. We need to show that $g^*$ and $\alpha^*$ cannot be $0$ simultaneously, and that $\psi^*$ and $\alpha^* - C$ cannot be $0$ simultaneously. For $i \in A$, we have $0 < \alpha_i^* < C$, and hence the intersection between $A$ and the working constraint set is empty by assumption. For $i \in B$, we have $\alpha_i^* = 0$ with $g_i^* > 0$; for $i \in C$, we have $\alpha_i^* = C$ with $\psi_i^* > 0$. Hence strict complementarity holds, which completes the proof. ∎
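The derivatives in Theorem 3 can be checked numerically by finite differences. The sketch below (synthetic data, scikit-learn as the solver, and a single perturbed input coordinate, all illustrative assumptions rather than the paper's procedure) estimates $\partial \alpha^* / \partial p$ for one component of $p$.

```python
# Finite-difference sketch of the sensitivity in Theorem 3 (illustrative:
# synthetic data; scikit-learn assumed; one coordinate of p is perturbed).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-1.0, 0.7, (20, 2)), rng.normal(+1.0, 0.7, (20, 2))])
y = np.array([-1] * 20 + [+1] * 20)

def dual_solution(inputs):
    """Solve dual problem (2) and return the full vector alpha*(p)."""
    clf = SVC(C=1.0, kernel="rbf", gamma=1.0, tol=1e-8).fit(inputs, y)
    alpha = np.zeros(len(y))
    alpha[clf.support_] = clf.dual_coef_.ravel() * y[clf.support_]
    return alpha

h = 1e-4                 # small perturbation of one training input entry
X_pert = X.copy()
X_pert[0, 0] += h        # p moves from p0 to p0 + h in a single component

d_alpha = (dual_solution(X_pert) - dual_solution(X)) / h  # ~ d alpha*/d p
print("max |d alpha*/d p|:", np.max(np.abs(d_alpha)))
# A large value signals a sensitive, unstable optimal solution at p0.
```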

Conclusion
The support vector classifier plays an important role in machine learning and data mining. Owing to the standard results that connect the primal problem and its dual problem, analysis of the primal problem can be achieved by working on the dual problem. Our main result establishes the equations for computing the partial derivatives of the optimal solution and of the corresponding optimal decision function with respect to the data parameters. Because a derivative measures a rate of change, and a large derivative often implies a large rate of change and hence a sensitive and unstable solution, our main result provides the foundation for analyzing the stability and sensitivity of the support vector classifier.
