Data Perturbation Analysis of the Support Vector Classifier Dual Model

The paper establishes a theorem of data perturbation analysis for the support vector classifier dual problem, from which the data perturbation analysis of the corresponding primary problem can be performed through standard results. The theorem derives the partial derivatives of the optimal solution and of the corresponding optimal decision function with respect to the data parameters, and thereby provides the basis for a quantitative analysis of how data errors influence them. It lays the foundation for analyzing the stability and sensitivity of the support vector classifier.

Cai, C. and Wang, X. (2018) Data Perturbation Analysis of the Support Vector Classifier Dual Model. Journal of Software Engineering and Applications, 11, 459-466. doi: 10.4236/jsea.2018.1110027.

1. Introduction

Many methods of data mining exist, among which machine learning is a major research direction of artificial intelligence. Statistical theory plays a fundamental role in machine learning. However, classical statistical theory is largely built on large-sample properties, while in practice we often face small samples, sometimes with a very limited number of observations due to resource or other constraints. Consequently, the performance of some large-sample methods may be unsatisfactory in real applications. In the 1960s, Vapnik and collaborators pioneered machine learning for finite samples and developed statistical learning theory. To deal with the inconsistency between minimizing the empirical risk and minimizing the expected risk, they proposed the principle of structural risk minimization to investigate the consistency of the machine learning process. Later, Vapnik and his colleagues at AT&T Bell Laboratories proposed the method of support vector machine. Within statistical learning theory this has been further developed into the method of support vector classifier (SVC), which has shown satisfactory performance and is becoming a general method of machine learning. We focus this paper on the SVC.

Let the training data set be $T=\left\{\left({x}_{1},{y}_{1}\right),\left({x}_{2},{y}_{2}\right),\cdots ,\left({x}_{m},{y}_{m}\right)\right\}\in {\left(X\times Y\right)}^{m}$, where ${x}_{i}\in X={R}^{n}$ is the training input and ${y}_{i}\in Y=\left\{-1,+1\right\}$ is the training output, $i=1,2,\cdots ,m$. The essence of the SVC problem for data mining is to seek a real-valued function $f\left(x\right)$ on the training input set ${R}^{n}$ such that the decision function $\operatorname{sgn} f\left(x\right)$ infers the category in $\left\{-1,+1\right\}$ of an arbitrary input $x\in X={R}^{n}$, where $\operatorname{sgn} f\left(x\right)=-1$ if $f\left(x\right)<0$ and $\operatorname{sgn} f\left(x\right)=1$ if $f\left(x\right)>0$. Because a linear separator over the original input space ${R}^{n}$ is not guaranteed to exist, a transformation $\varphi : x\mapsto \varphi \left(x\right)$ is often introduced from the input space ${R}^{n}$ to a high-dimensional Hilbert space $\mathcal{H}$, so that the training input set corresponds to a training set in $\mathcal{H}$. A separating hyperplane is then sought in $\mathcal{H}$ to solve the data mining classification problem.

The primary problem of the standard support vector classifier (SVC) is to

$\begin{array}{ll}\underset{w\in \mathcal{H},\,b\in R,\,\psi \in {R}^{m}}{\min} & \tau \left(w,b,\psi \right)=\frac{1}{2}{w}^{\text{T}}w+\sum_{i=1}^{m}{C}_{i}{\psi }_{i}\\ \text{subject to} & {g}_{i}\left(w,b,\psi \right)=-{y}_{i}\left({\left({x}_{i}\right)}^{\text{T}}w+b\right)-{\psi }_{i}+1\le 0,\quad i=1,2,\cdots ,m,\\ & {g}_{m+i}\left(w,b,\psi \right)=-{\psi }_{i}\le 0,\quad i=1,2,\cdots ,m,\end{array}$ (1)

where T stands for transpose, ${C}_{i}>0,i=1,\cdots ,m$ are penalty parameters, $\psi ={\left({\psi }_{1},\cdots ,{\psi }_{m}\right)}^{\text{T}}$ is the vector of slack variables, w consists of the slopes of the hyperplane, $b\in R$ is the intercept of the hyperplane, and ${x}_{i}=\varphi \left({x}_{i}\right)$ denotes the transformed input.

The quadratic Wolfe dual problem of the above primary problem (1) is to

$\begin{array}{ll}\underset{\alpha }{\min} & W\left(\alpha \right)=\frac{1}{2}{\alpha }^{\text{T}}H\alpha -{e}^{\text{T}}\alpha \\ \text{subject to} & h\left(\alpha \right)={\alpha }^{\text{T}}y=0,\\ & {C}_{i}\ge {h}_{i}\left(\alpha \right)={\alpha }_{i}\ge 0,\quad i=1,2,\cdots ,m,\end{array}$ (2)

where $\alpha ={\left({\alpha }_{1},\cdots ,{\alpha }_{m}\right)}^{\text{T}}$ , $y={\left({y}_{1},\cdots ,{y}_{m}\right)}^{\text{T}}$ , $e={\left(1,\cdots ,1\right)}^{\text{T}}$ , $K\left({x}_{i},{x}_{j}\right)={\left(\varphi \left({x}_{i}\right)\right)}^{\text{T}}\varphi \left({x}_{j}\right)$ is the kernel function, and ${H}_{ij}={y}_{i}{y}_{j}K\left({x}_{i},{x}_{j}\right)$.
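As an illustration, the matrix H of the dual objective can be assembled directly from the training data once a kernel is chosen. The sketch below uses a Gaussian kernel and a small toy data set purely for concreteness; the helper names are ours, not the paper's.

```python
import numpy as np

def rbf_kernel(xi, xj, gamma=1.0):
    # Gaussian kernel K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2), one common choice
    return np.exp(-gamma * np.sum((xi - xj) ** 2))

def dual_hessian(X, y, kernel=rbf_kernel):
    # H_ij = y_i y_j K(x_i, x_j): the Hessian of the dual objective W(alpha)
    m = X.shape[0]
    H = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            H[i, j] = y[i] * y[j] * kernel(X[i], X[j])
    return H

# A four-point toy set, two points per class
X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
H = dual_hessian(X, y)
```

Because H equals diag(y) K diag(y) with K a positive semi-definite Gram matrix, H is symmetric positive semi-definite, which is what makes (2) a convex quadratic program.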

The relationship between the optimal solution ${\alpha }_{i}^{*},i=1,\cdots ,m$ of the dual problem (2) and the optimal solution $\left({w}^{*},{b}^{*}\right)$ of the primary problem (1) is given by

$\begin{array}{l}{w}^{*}=\sum_{i=1}^{m}{\alpha }_{i}^{*}{y}_{i}{x}_{i},\\ {b}^{*}={y}_{i}-{\left(\varphi \left({x}_{i}\right)\right)}^{\text{T}}{w}^{*}={y}_{i}-\sum_{j=1}^{m}{\alpha }_{j}^{*}{y}_{j}K\left({x}_{j},{x}_{i}\right)\quad \text{for any }{\alpha }_{i}^{*}\in \left(0,{C}_{i}\right).\end{array}$

The corresponding optimal decision function is given by $sgn\text{ }f\left(x\right)$ , where $f\left(x\right)={\sum }_{i=1}^{m}\text{ }{\alpha }_{i}^{*}{y}_{i}K\left({x}_{i},x\right)+{b}^{*}$. Furthermore, each component ${\alpha }_{i}^{*}$ of the optimal solution ${\alpha }^{*}$ corresponds to a training input, and we call a training input ${x}_{i}$ a support vector if its corresponding ${\alpha }_{i}^{*}>0$.
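The recovery of $b^*$ and of the decision function from a dual solution can be sketched as follows. The two-point data set and its known dual optimum $\alpha^*=(1/2,1/2)$ (for points at $-1$ and $+1$ with a linear kernel) are illustrative assumptions, as are the function names.

```python
import numpy as np

def linear_kernel(xi, xj):
    # K(x_i, x_j) = x_i^T x_j; any positive-definite kernel could be substituted
    return float(np.dot(xi, xj))

def intercept(alpha, X, y, C, kernel=linear_kernel, tol=1e-8):
    # b* = y_i - sum_j alpha_j* y_j K(x_j, x_i) for any free alpha_i* in (0, C_i)
    m = len(alpha)
    i = next(k for k in range(m) if tol < alpha[k] < C[k] - tol)
    return y[i] - sum(alpha[j] * y[j] * kernel(X[j], X[i]) for j in range(m))

def decision(x, alpha, X, y, b, kernel=linear_kernel):
    # f(x) = sum_i alpha_i* y_i K(x_i, x) + b*; the predicted label is sgn f(x)
    return sum(alpha[i] * y[i] * kernel(X[i], x) for i in range(len(alpha))) + b

# Two points at -1 and +1 on the line: the dual optimum is alpha* = (1/2, 1/2)
X = np.array([[-1.0], [1.0]])
y = np.array([-1.0, 1.0])
alpha = np.array([0.5, 0.5])
C = np.array([1.0, 1.0])
b = intercept(alpha, X, y, C)
```

For this configuration the separating hyperplane is $f(x)=x$ with $b^*=0$, so both training points are support vectors sitting exactly on the margin.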

In data mining, the training input data ${x}_{i},i=1,2,\cdots ,m$ of the support vector classifier (SVC) model are approximations of the true values. When the approximate values are used to establish the SVC model, errors in the data inevitably affect the optimal solution and the corresponding optimal decision function. When an upper bound on the data errors is known, data perturbation analysis can be used to derive upper bounds on the resulting errors in the optimal solution and in the corresponding optimal decision function.

We are interested in the stability and sensitivity analysis of the optimal solution and its corresponding optimal decision function. Our analysis is based on Fiacco's Theorem on the sensitivity analysis of a general class of nonlinear programming problems. The second-order sufficiency condition required by Fiacco's Theorem has been studied further in the literature, and kernel functions are widely used in machine learning. In this paper, we establish a theorem on data perturbation analysis for a general class of SVC problems with kernel functions. The paper is organized as follows. The main theorem and its lemmas are presented in the next section. Section 3 concludes the paper.

2. Data Perturbation Analysis of the SVC Dual Problem

Suppose that $\left({w}^{*},{b}^{*},{\psi }^{*}\right)$ is the optimal solution of the primary problem (1). Corresponding to the solution $\left({w}^{*},{b}^{*},{\psi }^{*}\right)$ , we divide, for convenience, the training data $\left({x}_{1},{y}_{1}\right),\left({x}_{2},{y}_{2}\right),\cdots ,\left({x}_{m},{y}_{m}\right)$ into categories $A,B,C$ as follows:

1) A category: points satisfying ${\left({x}_{i}\right)}^{\text{T}}{w}^{*}+{b}^{*}=1$ and ${y}_{i}=+1$ , or ${\left({x}_{i}\right)}^{\text{T}}{w}^{*}+{b}^{*}=-1$ and ${y}_{i}=-1$. These points are denoted as $\left({x}_{1},{y}_{1}\right),\left({x}_{2},{y}_{2}\right),\cdots ,\left({x}_{t},{y}_{t}\right)$ for convenience. Denote $A=\left\{1,\cdots ,t\right\}$.

2) B category: points satisfying ${\left({x}_{i}\right)}^{\text{T}}{w}^{*}+{b}^{*}>1$ and ${y}_{i}=+1$ , or ${\left({x}_{i}\right)}^{\text{T}}{w}^{*}+{b}^{*}<-1$ and ${y}_{i}=-1$. These points are denoted as $\left({x}_{t+1},{y}_{t+1}\right),\left({x}_{t+2},{y}_{t+2}\right),\cdots ,\left({x}_{s},{y}_{s}\right)$ for convenience. Denote $B=\left\{t+1,\cdots ,s\right\}$.

3) C category: points satisfying ${\left({x}_{i}\right)}^{\text{T}}{w}^{*}+{b}^{*}<1$ and ${y}_{i}=+1$ , or ${\left({x}_{i}\right)}^{\text{T}}{w}^{*}+{b}^{*}>-1$ and ${y}_{i}=-1$. These points are denoted as $\left({x}_{s+1},{y}_{s+1}\right),\left({x}_{s+2},{y}_{s+2}\right),\cdots ,\left({x}_{m},{y}_{m}\right)$ for convenience. Denote $C=\left\{s+1,\cdots ,m\right\}$.
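Assuming the margin values ${\left({x}_{i}\right)}^{\text{T}}{w}^{*}+{b}^{*}$ have been computed, the A/B/C split above can be expressed compactly, since the three cases correspond to ${y}_{i}\left({\left({x}_{i}\right)}^{\text{T}}{w}^{*}+{b}^{*}\right)$ being equal to 1, greater than 1, and less than 1, respectively. A minimal sketch with hypothetical names:

```python
def categorize(margins, y, tol=1e-9):
    # margins[i] = (x_i)^T w* + b*; split indices by y_i * margin relative to 1
    A, B, C = [], [], []
    for i, (m_i, y_i) in enumerate(zip(margins, y)):
        ym = y_i * m_i
        if abs(ym - 1.0) < tol:    # exactly on the margin: A category
            A.append(i)
        elif ym > 1.0:             # strictly outside the margin, correct side: B
            B.append(i)
        else:                      # inside the margin or misclassified: C
            C.append(i)
    return A, B, C

# Two points on the margin, two well separated, two violating the margin
margins = [1.0, -1.0, 2.5, -3.0, 0.2, -0.5]
labels = [1, -1, 1, -1, 1, -1]
A, B, C = categorize(margins, labels)
```

In floating point the equality defining the A category must of course be tested with a tolerance, which is what `tol` provides.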

We call an index i such that either ${g}_{i}\left(w,b,\psi \right)=0$ or ${g}_{m+i}\left(w,b,\psi \right)=0$ a working (active) constraint index. We start with a lemma.

Lemma 1. Suppose that $\left({w}^{*},{b}^{*},{\psi }^{*}\right)$ is the optimal solution of the primary problem (1). Then the set of all working constraint indices is given by

$\begin{array}{c}I\left({w}^{*},{b}^{*},{\psi }^{*}\right)=\left\{1,\cdots ,t,m+1,\cdots ,m+t,m+t+1,\cdots ,m+s,s+1,\cdots ,m\right\}\\ =A\cup \left\{m+i:i\in A\right\}\cup \left\{m+j:j\in B\right\}\cup C.\end{array}$

Proof. The proof is straightforward from the definition of a working constraint index and the observation that points in the A and B categories satisfy ${\psi }_{i}=0$ while points in the C category satisfy ${\psi }_{i}>0$. □

Then we offer a sufficient condition for the linear independence of the gradients indexed by the set $I\left({w}^{*},{b}^{*},{\psi }^{*}\right)$.

Lemma 2. Let ${\alpha }^{*}={\left({\alpha }_{1}^{*},\cdots ,{\alpha }_{m}^{*}\right)}^{\text{T}}$ be the optimal solution of the dual problem (2). If there exists a component ${\alpha }_{j}^{*}$ of ${\alpha }^{*}$ such that $0<{\alpha }_{j}^{*}<{C}_{j}$ for some $j\in \left\{1,2,\cdots ,m\right\}$, then the set of gradients $\left\{{\left(\frac{\partial h}{\partial \alpha }\right)}^{\text{T}},{\left(\frac{\partial {h}_{i}}{\partial \alpha }\right)}^{\text{T}},i\in I\left({w}^{*},{b}^{*},{\psi }^{*}\right)\right\}$ is linearly independent.

Proof. For $i\in A$, suppose there exists $1\le {t}_{1}\le t$ such that ${\alpha }_{i}^{*}>0$ for $i=1,2,\cdots ,{t}_{1}$ but ${\alpha }_{i}^{*}=0$ for $i={t}_{1}+1,\cdots ,t$. For $i\in B$, we always have ${\alpha }_{i}^{*}=0$. For $i\in C$, we always have ${\alpha }_{i}^{*}={C}_{i}$. The set of gradients

$\left\{{\left(\frac{\partial h}{\partial \alpha }\right)}^{\text{T}},{\left(\frac{\partial {h}_{i}}{\partial \alpha }\right)}^{\text{T}},i\in I\left({w}^{*},{b}^{*},{\psi }^{*}\right)\right\}$ becomes

$\left(\begin{array}{c}{y}_{1}\\ {y}_{2}\\ {y}_{3}\\ ⋮\\ ⋮\\ ⋮\\ ⋮\\ {y}_{m}\end{array}\right),\left(\begin{array}{c}0\\ -1\\ 0\\ ⋮\\ 0\\ 0\\ ⋮\\ 0\end{array}\right),\left(\begin{array}{c}0\\ 0\\ -1\\ ⋮\\ 0\\ 0\\ ⋮\\ 0\end{array}\right),\cdots ,\left(\begin{array}{c}0\\ 0\\ 0\\ ⋮\\ -1\\ 0\\ ⋮\\ 0\end{array}\right),\left(\begin{array}{c}0\\ 0\\ 0\\ ⋮\\ 0\\ 1\\ ⋮\\ 0\end{array}\right),\cdots ,\left(\begin{array}{c}0\\ 0\\ 0\\ ⋮\\ 0\\ 0\\ ⋮\\ 1\end{array}\right)$

Since it is assumed that there is a support vector multiplier ${\alpha }_{i}^{*}$ with $0<{\alpha }_{i}^{*}<{C}_{i}$, the number of gradients in $\left\{{\left(\frac{\partial {h}_{i}}{\partial \alpha }\right)}^{\text{T}},i\in I\left({w}^{*},{b}^{*},{\psi }^{*}\right)\right\}$ must be smaller than m, the dimension of the vector $y={\left(\frac{\partial h}{\partial \alpha }\right)}^{\text{T}}$. Therefore the vector ${\left(\frac{\partial h}{\partial \alpha }\right)}^{\text{T}}=y$, whose m components are all −1 or +1, cannot be expressed as a linear combination of the gradients $\left\{{\left(\frac{\partial {h}_{i}}{\partial \alpha }\right)}^{\text{T}},i\in I\left({w}^{*},{b}^{*},{\psi }^{*}\right)\right\}$. Hence the set of constraint gradients is linearly independent. □
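The argument of Lemma 2 can be spot-checked numerically on a small configuration: collect the gradient y of the equality constraint together with one signed unit vector per active bound, and verify that the resulting matrix has full rank. The data below (one free multiplier, the rest on a bound) and the helper name are illustrative assumptions.

```python
import numpy as np

def working_gradients(y, alpha, C, tol=1e-9):
    # Gradient y of the equality constraint alpha^T y = 0, plus a unit vector
    # for each active bound: -e_i where alpha_i = 0, +e_i where alpha_i = C_i.
    m = len(y)
    cols = [np.asarray(y, dtype=float)]
    for i in range(m):
        if alpha[i] < tol:                  # bound alpha_i >= 0 is active
            e = np.zeros(m); e[i] = -1.0
            cols.append(e)
        elif alpha[i] > C[i] - tol:         # bound alpha_i <= C_i is active
            e = np.zeros(m); e[i] = 1.0
            cols.append(e)
    return np.column_stack(cols)

# One free multiplier (index 0), as the lemma assumes; the rest sit on a bound
y = np.array([1.0, -1.0, 1.0, -1.0])
alpha = np.array([0.5, 0.0, 1.0, 0.0])      # 0 < alpha_0 < C_0
C = np.array([1.0, 1.0, 1.0, 1.0])
G = working_gradients(y, alpha, C)
```

With one free multiplier there are at most m − 1 active bound gradients, and the component of y in the free coordinate cannot be reproduced by them, exactly as in the proof.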

We use the lower case letter p to denote the training input variable and ${p}_{0}$ to denote the observed training input data. When the form of the transformation $\varphi \left(x\right)$, and hence of the kernel function, is known, the following main theorem investigates how errors in the training input data affect the optimal solution and the corresponding optimal decision function of the SVC model.

Theorem 3. Suppose that ${\alpha }^{*}={\left({\alpha }_{1}^{*},\cdots ,{\alpha }_{m}^{*}\right)}^{\text{T}}$ is the optimal solution of the dual problem (2) when $p={p}_{0}$, with corresponding Lagrange multipliers ${b}^{*}$, ${g}^{*}={\left({g}_{1}^{*},\cdots ,{g}_{m}^{*}\right)}^{\text{T}}$ and ${\psi }^{*}={\left({\psi }_{1}^{*},\cdots ,{\psi }_{m}^{*}\right)}^{\text{T}}={\left({g}_{m+1}^{*},\cdots ,{g}_{2m}^{*}\right)}^{\text{T}}$. Suppose that the A category input vectors ${x}_{1},{x}_{2},\cdots ,{x}_{t}$ are all support vectors with $0<{\alpha }_{i}^{*}<{C}_{i},i=1,2,\cdots ,t$, and that the set of vectors

$\left(\begin{array}{c}{y}_{1}\varphi \left({x}_{1}\right)\\ {y}_{1}\end{array}\right),\left(\begin{array}{c}{y}_{2}\varphi \left({x}_{2}\right)\\ {y}_{2}\end{array}\right),\cdots ,\left(\begin{array}{c}{y}_{t}\varphi \left({x}_{t}\right)\\ {y}_{t}\end{array}\right)$

corresponding to points in category A is linearly independent. Then the following results hold.

1) The optimal solution ${\alpha }^{*}={\left({\alpha }_{1}^{*},\cdots ,{\alpha }_{m}^{*}\right)}^{\text{T}}$ is unique, and the corresponding multiplier ${b}^{*}$ as well as ${g}^{*}={\left({g}_{1}^{*},\cdots ,{g}_{m}^{*}\right)}^{\text{T}}$ and ${\psi }^{*}={\left({\psi }_{1}^{*},\cdots ,{\psi }_{m}^{*}\right)}^{\text{T}}={\left({g}_{m+1}^{*},\cdots ,{g}_{2m}^{*}\right)}^{\text{T}}$ are all unique.

2) There is a neighbourhood $N\left({p}_{0}\right)$ of ${p}_{0}$ on which there is a unique continuously differentiable function $y\left(p\right)=\left(\alpha \left(p\right),b\left(p\right),g\left(p\right),\psi \left(p\right)\right)$ such that

a) $y\left({p}_{0}\right)=\left({\alpha }^{*},{b}^{*},{g}^{*},{\psi }^{*}\right)$ ,

b) $\alpha \left(p\right)$ is a feasible solution to the dual problem (2) for any $p\in N\left({p}_{0}\right)$ ,

c) the partial derivatives of $y\left(p\right)=\left(\alpha \left(p\right),b\left(p\right),g\left(p\right),\psi \left(p\right)\right)$ with respect to data parameters satisfy

$M\left(p\right)\left(\begin{array}{c}{\left(\frac{\partial \alpha }{\partial p}\right)}^{\text{T}}\\ {\left(\frac{\partial g}{\partial p}\right)}^{\text{T}}\\ {\left(\frac{\partial \psi }{\partial p}\right)}^{\text{T}}\\ {\left(\frac{\partial b}{\partial p}\right)}^{\text{T}}\end{array}\right)={M}_{1}\left(p\right),$

where ${M}_{1}\left(p\right)=-{\left[\frac{\partial \left({\nabla }_{\alpha }L\right)}{\partial p},{g}_{1}{\nabla }_{p}{\alpha }_{1},\cdots ,{g}_{m}{\nabla }_{p}{\alpha }_{m},{\psi }_{1}{\nabla }_{p}\left({\alpha }_{1}-{C}_{1}\right),\cdots ,{\psi }_{m}{\nabla }_{p}\left({\alpha }_{m}-{C}_{m}\right),{\nabla }_{p}\left({y}^{\text{T}}\alpha \right)\right]}^{\text{T}}$, ${\nabla }_{\alpha }L$ is the gradient of L with respect to $\alpha$, ${\nabla }_{p}{\alpha }_{i}$ and ${\nabla }_{p}\left({\alpha }_{i}-{C}_{i}\right)$, $i=1,\cdots ,m$, are the gradients of ${\alpha }_{i}$ and ${\alpha }_{i}-{C}_{i}$ with respect to p, ${\nabla }_{p}\left({y}^{\text{T}}\alpha \right)$ is the gradient of ${y}^{\text{T}}\alpha$ with respect to p, and L is the Lagrange function of the dual problem (2), $M\left(p\right)=\left(\begin{array}{cc}E& F\\ U& V\end{array}\right)$, and

$E=\left(\begin{array}{cccc}{y}_{1}^{2}K\left({x}_{1},{x}_{1}\right)& {y}_{1}{y}_{2}K\left({x}_{1},{x}_{2}\right)& \cdots & {y}_{1}{y}_{m}K\left({x}_{1},{x}_{m}\right)\\ {y}_{2}{y}_{1}K\left({x}_{2},{x}_{1}\right)& {y}_{2}^{2}K\left({x}_{2},{x}_{2}\right)& \cdots & {y}_{2}{y}_{m}K\left({x}_{2},{x}_{m}\right)\\ ⋮& ⋮& \ddots & ⋮\\ {y}_{m}{y}_{1}K\left({x}_{m},{x}_{1}\right)& {y}_{m}{y}_{2}K\left({x}_{m},{x}_{2}\right)& \cdots & {y}_{m}^{2}K\left({x}_{m},{x}_{m}\right)\end{array}\right),$

$F=\left(\begin{array}{ccccccccc}-1& 0& \cdots & 0& 1& 0& \cdots & 0& {y}_{1}\\ 0& -1& \cdots & 0& 0& 1& \cdots & 0& {y}_{2}\\ ⋮& ⋮& \ddots & ⋮& ⋮& ⋮& \ddots & ⋮& ⋮\\ 0& 0& \cdots & -1& 0& 0& \cdots & 1& {y}_{m}\end{array}\right),$

$U=\left(\begin{array}{cccc}-{g}_{1}^{*}& 0& \cdots & 0\\ 0& -{g}_{2}^{*}& \cdots & 0\\ ⋮& ⋮& \ddots & ⋮\\ 0& 0& \cdots & -{g}_{m}^{*}\\ {\psi }_{1}^{*}& 0& \cdots & 0\\ 0& {\psi }_{2}^{*}& \cdots & 0\\ ⋮& ⋮& \ddots & ⋮\\ 0& 0& \cdots & {\psi }_{m}^{*}\\ {y}_{1}& {y}_{2}& \cdots & {y}_{m}\end{array}\right),$

and

$V=\left(\begin{array}{ccccccccc}-{\alpha }_{1}& 0& \cdots & 0& 0& 0& \cdots & 0& 0\\ 0& -{\alpha }_{2}& \cdots & 0& 0& 0& \cdots & 0& 0\\ ⋮& ⋮& \ddots & ⋮& ⋮& ⋮& \ddots & ⋮& ⋮\\ 0& 0& \cdots & -{\alpha }_{m}& 0& 0& \cdots & 0& 0\\ 0& 0& \cdots & 0& {\alpha }_{1}-{C}_{1}& 0& \cdots & 0& 0\\ 0& 0& \cdots & 0& 0& {\alpha }_{2}-{C}_{2}& \cdots & 0& 0\\ ⋮& ⋮& \ddots & ⋮& ⋮& ⋮& \ddots & ⋮& ⋮\\ 0& 0& \cdots & 0& 0& 0& 0& {\alpha }_{m}-{C}_{m}& 0\\ 0& 0& \cdots & 0& 0& 0& 0& 0& 0\end{array}\right)$
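To make the block structure of $M\left(p\right)$ concrete, the sketch below assembles $E=H$, F, U and V for a two-point toy problem and checks that the assembled system is nonsingular, so the derivative system is solvable. The unknowns are ordered to match the blocks, $\alpha$ first and then the multipliers g, $\psi$, b (F's columns are $\left[-I\,|\,I\,|\,y\right]$). The data and the helper name are illustrative assumptions.

```python
import numpy as np

def assemble_M(H, y, alpha, g, psi, C):
    # M = [[E, F], [U, V]] with unknowns ordered (alpha, g, psi, b):
    # E = H, F = [-I | I | y], U stacks diag(-g), diag(psi), y^T,
    # V stacks diag(-alpha), diag(alpha - C), and a final zero row.
    m = len(y)
    E = H
    F = np.hstack([-np.eye(m), np.eye(m), y.reshape(-1, 1)])
    U = np.vstack([np.diag(-g), np.diag(psi), y.reshape(1, -1)])
    V = np.zeros((2 * m + 1, 2 * m + 1))
    V[:m, :m] = np.diag(-alpha)
    V[m:2 * m, m:2 * m] = np.diag(alpha - C)
    return np.block([[E, F], [U, V]])

# Toy configuration: two free support vectors (points at -1 and +1, linear kernel),
# so alpha* = (1/2, 1/2) and all bound multipliers g*, psi* vanish.
H = np.array([[1.0, 1.0], [1.0, 1.0]])
y = np.array([-1.0, 1.0])
alpha = np.array([0.5, 0.5])
M = assemble_M(H, y, alpha, np.zeros(2), np.zeros(2), np.ones(2))
```

For m training points M is a square matrix of order 3m + 1, here 7.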

Proof.

Our results follow directly from the Fiacco Theorem after checking the three conditions it requires:

1) the Second Order Sufficiency Condition for ${\alpha }^{*}$;

2) Linear Independence of the Gradients over the Working Constraint Set;

3) the Strict Complementarity Property.

Proof of the Second Order Sufficiency Condition. Suppose that ${\alpha }^{*}={\left({\alpha }_{1}^{*},\cdots ,{\alpha }_{m}^{*}\right)}^{\text{T}}$ is the optimal solution to the dual problem. Then there exist multipliers ${b}^{*}\in R,{g}^{*}={\left({g}_{1}^{*},\cdots ,{g}_{m}^{*}\right)}^{\text{T}}\in {R}^{m}$ and ${\psi }^{*}={\left({\psi }_{1}^{*},\cdots ,{\psi }_{m}^{*}\right)}^{\text{T}}={\left({g}_{m+1}^{*},\cdots ,{g}_{2m}^{*}\right)}^{\text{T}}\in {R}^{m}$ satisfying the Karush-Kuhn-Tucker conditions:

$H{\alpha }^{*}-e+{b}^{*}y-{g}^{*}+{\psi }^{*}=0,$

${g}_{i}^{*}\ge 0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}\text{any}\text{\hspace{0.17em}}i,$

${\psi }_{i}^{*}\ge 0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}\text{any}\text{\hspace{0.17em}}i,$

${\left({g}^{*}\right)}^{\text{T}}{\alpha }^{*}=0,$

${\left({\psi }^{*}\right)}^{\text{T}}\left({\alpha }^{*}-C\right)=0,$

where $H=\left({H}_{ij}\right)$ and $C={\left({C}_{1},\cdots ,{C}_{m}\right)}^{\text{T}}$.
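These conditions can be verified numerically for a candidate solution. The sketch below (hypothetical function name; toy hard-margin data with two points at $-1$ and $+1$ under a linear kernel) computes the residual of each condition:

```python
import numpy as np

def kkt_residuals(H, alpha, y, b, g, psi, C):
    # Residuals of the Karush-Kuhn-Tucker conditions for the dual problem:
    # stationarity H*alpha - e + b*y - g + psi = 0, plus the two
    # complementarity products (g*)^T alpha* and (psi*)^T (alpha* - C).
    e = np.ones(len(alpha))
    stationarity = H @ alpha - e + b * y - g + psi
    comp_lower = g @ alpha             # should be 0
    comp_upper = psi @ (alpha - C)     # should be 0
    return stationarity, comp_lower, comp_upper

# Toy optimum: alpha* = (1/2, 1/2), b* = 0, all bound multipliers zero
H = np.array([[1.0, 1.0], [1.0, 1.0]])
alpha = np.array([0.5, 0.5])
y = np.array([-1.0, 1.0])
stat, comp_l, comp_u = kkt_residuals(H, alpha, y, 0.0, np.zeros(2), np.zeros(2), np.ones(2))
```

Nonnegativity of g and psi is checked separately (here both are identically zero, so it holds trivially).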

The Hessian matrix corresponding to the Lagrangian function

$\begin{array}{l}L\left({\alpha }^{*},{b}^{*},{g}^{*},{\psi }^{*}\right)\\ =\frac{1}{2}{\left({\alpha }^{*}\right)}^{\text{T}}H{\alpha }^{*}-{\text{e}}^{\text{T}}{\alpha }^{*}+{b}^{*}\left({\left({\alpha }^{*}\right)}^{\text{T}}y\right)-{\left({g}^{*}\right)}^{\text{T}}{\alpha }^{*}+{\left({\psi }^{*}\right)}^{\text{T}}\left({\alpha }^{*}-C\right)\end{array}$

becomes the matrix H.

For $i\in A$ , we have assumed that ${\alpha }_{i}^{*}>0$. For $i\in B$ , we have ${g}_{i}^{*}>0,{\alpha }_{i}^{*}=0$ and ${\psi }_{i}^{*}=0$. For $i\in C$ , we have ${g}_{i}^{*}=0,{\alpha }_{i}^{*}={C}_{i}$ and ${\psi }_{i}^{*}>0$.

Define the set $Z=\left\{z:{z}^{\text{T}}y=0;{z}_{i}\le 0,i=1,\cdots ,t;{z}_{i}=0,i=t+1,\cdots ,m\right\}$. Then for any $z=\left({z}_{1},\cdots ,{z}_{t},0,\cdots ,0\right)\in Z,z\ne 0$ , we have

$\begin{array}{c}{z}^{\text{T}}{\nabla }^{2}L\left({\alpha }^{*},{b}^{*},{g}^{*},{\psi }^{*}\right)z=\left({z}_{1},\cdots ,{z}_{t},0,\cdots ,0\right)H\left(\begin{array}{c}{z}_{1}\\ ⋮\\ {z}_{t}\\ 0\\ ⋮\\ 0\end{array}\right)=\left({z}_{1},\cdots ,{z}_{t}\right){H}^{*}\left(\begin{array}{c}{z}_{1}\\ ⋮\\ {z}_{t}\end{array}\right)\\ ={\left(\underset{i=1}{\overset{t}{\sum }}\text{ }{z}_{i}{y}_{i}\varphi \left({x}_{i}\right)\right)}^{\text{T}}\left(\underset{i=1}{\overset{t}{\sum }}\text{ }{z}_{i}{y}_{i}\varphi \left({x}_{i}\right)\right),\end{array}$

where ${\nabla }^{2}L\left({\alpha }^{*},{b}^{*},{g}^{*},{\psi }^{*}\right)=H$ is the Hessian matrix of $L\left({\alpha }^{*},{b}^{*},{g}^{*},{\psi }^{*}\right)$ and ${H}^{*}$ is the upper left $t×t$ sub-matrix of H.

Suppose that ${z}_{1}{y}_{1}\varphi \left({x}_{1}\right)+\cdots +{z}_{t}{y}_{t}\varphi \left({x}_{t}\right)=0$. Then because the set of vectors

$\left(\begin{array}{c}{y}_{1}\varphi \left({x}_{1}\right)\\ {y}_{1}\end{array}\right),\left(\begin{array}{c}{y}_{2}\varphi \left({x}_{2}\right)\\ {y}_{2}\end{array}\right),\cdots ,\left(\begin{array}{c}{y}_{t}\varphi \left({x}_{t}\right)\\ {y}_{t}\end{array}\right)$

corresponding to points in category A is linearly independent and ${z}_{1}{y}_{1}+\cdots +{z}_{t}{y}_{t}=0$, we must have ${z}_{1}=\cdots ={z}_{t}=0$. This contradicts the assumption that $z\ne 0$. Hence we must have ${z}_{1}{y}_{1}\varphi \left({x}_{1}\right)+\cdots +{z}_{t}{y}_{t}\varphi \left({x}_{t}\right)\ne 0$. This implies that ${z}^{\text{T}}{\nabla }^{2}L\left({\alpha }^{*},{b}^{*},{g}^{*},{\psi }^{*}\right)z>0$, and therefore the Second Order Sufficiency Condition is satisfied at ${\alpha }^{*}$.
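The positivity of ${z}^{\text{T}}{\nabla }^{2}Lz$ on the cone Z can also be spot-checked numerically by sampling feasible directions. This is only a heuristic check on the restricted Hessian ${H}^{*}$, with illustrative data and a hypothetical function name.

```python
import numpy as np

def sosc_holds(H_star, y_A, trials=1000, seed=0):
    # Sample directions in Z = {z : z^T y = 0, z <= 0} \ {0}, restricted to the
    # A-category coordinates, and require z^T H* z > 0 on every sampled z.
    rng = np.random.default_rng(seed)
    y_A = np.asarray(y_A, dtype=float)
    for _ in range(trials):
        z = -rng.random(len(y_A))               # strictly nonpositive components
        z -= y_A * (z @ y_A) / (y_A @ y_A)      # project onto {z : z^T y = 0}
        if np.all(z <= 0) and np.linalg.norm(z) > 1e-12:
            if z @ H_star @ z <= 0:
                return False
    return True

# H* for two margin points at -1 and +1 with a linear kernel, labels (-1, +1)
ok = sosc_holds(np.array([[1.0, 1.0], [1.0, 1.0]]), [-1.0, 1.0])
```

A sampling check can only refute the condition, never prove it; the proof above is what establishes it in general.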

Proof of the Linear Independence of Gradients over the Working Constraint Set. This is Lemma 2.

Proof of the Strict Complementarity Property. We need to show that ${g}_{i}^{*}$ and ${\alpha }_{i}^{*}$ cannot both be 0, and that ${\psi }_{i}^{*}$ and ${\alpha }_{i}^{*}-{C}_{i}$ cannot both be 0, at any working constraint index. For $i\in A$, we have $0<{\alpha }_{i}^{*}<{C}_{i}$ by assumption, so neither bound constraint is active and there is nothing to check. For $i\in B$, we have ${\alpha }_{i}^{*}=0$ and multiplier ${g}_{i}^{*}>0$. For $i\in C$, we have ${\alpha }_{i}^{*}={C}_{i}$ and multiplier ${\psi }_{i}^{*}>0$. Therefore the Strict Complementarity Property holds. □

3. Conclusion

The support vector classifier plays an important role in machine learning and data mining. Through the standard results that connect the primary problem and its dual problem, analysis of the primary problem can be carried out by working on the dual problem. Our main result establishes the equation for the partial derivatives of the optimal solution and of the corresponding optimal decision function with respect to the data parameters. Because a derivative measures a rate of change, and a large derivative signals a sensitive and potentially unstable solution, our main result provides the foundation for quantitative, derivative-based stability analysis.

Acknowledgements

This work is funded with grants from the Importation and Development of High-Caliber Talents Project of Beijing Municipal Institutions CIT & TCD201404080 and New Start Academic Research Projects of Beijing Union University. Xikui Wang acknowledges research support from the Natural Sciences and Engineering Research Council (NSERC) of Canada and from the University of Manitoba.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Vapnik, V.N. and Lerner, A. (1963) Pattern Recognition Using Generalized Portrait Method. Automation and Remote Control, 24, 774-780.

[2] Vapnik, V.N. (1982) Estimation of Dependence Based on Empirical Data. Springer-Verlag, New York.

[3] Cortes, C. and Vapnik, V.N. (1995) Support Vector Networks. Machine Learning, 20, 273-297. https://doi.org/10.1007/BF00994018

[4] Deng, N. and Tian, Y. (2004) New Data Mining Method—Support Vector Machine. Science Press, Beijing.

[5] Liu, J., Li, S.C. and Luo, X. (2012) Classification Algorithm of Support Vector Machine via p-Norm Regularization. Acta Automatica Sinica, 38, 76-87. https://doi.org/10.3724/SP.J.1004.2012.00076

[6] Fiacco, A. (1976) Sensitivity Analysis for Nonlinear Programming Using Penalty Methods. Mathematical Programming, 10, 287-331. https://doi.org/10.1007/BF01580677

[7] Liu, B. (1988) Nonlinear Programming. Beijing Institute of Technology Press, Beijing.

[8] Wu, Q. and Yu, Z. (1989) Sensitivity Analysis for Nonlinear Programming Using Generalized Reduced Gradient Method. Journal of Beijing Institute of Technology, 9, 88-96.

[9] Cai, C., Liu, B. and Deng, N. (2006) Second Order Sufficient Conditions Property for Linear Support Vector Classifier. Journal of China Agricultural University, 11, 92-95.

[10] Scholkopf, B., Burges, C.J.C. and Smola, A.J. (1999) Advances in Kernel Methods: Support Vector Learning. MIT Press, Cambridge, MA, 327-352.

[11] Cristianini, N. and Shawe-Taylor, J. (2000) An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9780511801389

[12] Li, H. and Zhong, B. (2009) Modified Kernel Function for Support Vector Machines Classifier. Computer Engineering and Applications, 45, 53-55.