
In the field of machining, the design of cutting parameters is largely subjective and experience-driven. To obtain an ideal surface roughness (Ra), a large number of machining experiments are usually carried out, which wastes material, labor, and energy. In addition, the relationship between the three elements of cutting (cutting speed v_c, feed f, and cutting depth a_p) and the surface roughness is highly nonlinear and hard to express clearly with mathematical equations. In this paper, a support vector machine (SVM) is therefore used to establish a model between the cutting elements and the surface roughness. Then, taking the surface roughness as the optimization goal and the cutting elements as the optimization parameters, particle swarm optimization (PSO) is applied to obtain a group of cutting parameters that achieves the desired surface roughness. This provides an easy, accurate, and feasible method for the optimization design of machining cutting parameters.

Stainless steel is an excellent high-performance alloy steel that resists corrosion in air and in chemically corrosive media, and it has wide application prospects. Stainless steel has strong toughness, low thermal conductivity, and severe work hardening, which lead to large cutting forces, high cutting temperatures, and tools that are prone to adhesion and wear. Stainless steel is therefore a difficult-to-machine metal material [. In this paper, the optimization variables are the cutting speed (v_c), feed (f), and cutting depth (a_p), and the optimization target is the surface roughness of the workpiece, in order to achieve the expected surface quality.

Data-based machine learning is an important aspect of modern intelligent technology. Machine learning is essentially an approximation of the true model of a problem: starting from observed data (samples), it seeks the rules that can be used to predict unknown data.

Support vector machine (SVM) is a machine learning method developed in the mid-1990s. It is based on statistical learning theory and improves the generalization ability of the learning machine by seeking structural risk minimization, jointly minimizing the empirical risk and the confidence interval. Therefore, good statistical laws can be obtained even when the number of samples is small. Because of its outstanding learning performance, SVM has attracted many scholars, has become a research hotspot in the machine learning community, and has been successfully applied in many fields, such as face recognition, handwritten digit recognition, automatic text classification, and machine translation [

The basic idea of SVM is to use a kernel function to map the input sample space to a high-dimensional feature space, find an optimal classification surface in that high-dimensional space, and thereby obtain the nonlinear relationship between the input and output variables [

Assume the training data set $T = \{(x_1, y_1), (x_2, y_2), \cdots, (x_N, y_N)\}$ in a given feature space, where $x_i \in \mathbb{R}^n$ is the $i$-th feature vector (also called an instance), and $y_i \in \{1, -1\}$, $i = 1, 2, \cdots, N$, is the class label of $x_i$. When $y_i = 1$, $x_i$ is called a positive example; when $y_i = -1$, $x_i$ is called a negative example. $(x_i, y_i)$ is a sample point. The key of the algorithm is to establish a classification hyperplane as the decision surface that maximizes the separation margin between positive and negative examples. Finding the classification hyperplane amounts to solving:

$$\min_{w,\,b}\ \varphi(w) = \frac{1}{2}\|w\|^2 \qquad (1)$$

$$\text{s.t.}\quad y_i \left( w \cdot x_i + b \right) \ge 1, \quad i = 1, 2, \cdots, N$$

where: w is the normal vector of the hyperplane, b is the constant term of the hyperplane, x_{i} is the training sample, and y_{i} is the type of the sample.
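The constraint and objective in Eq. (1) can be checked numerically. The sketch below (a toy illustration with assumed points and an assumed hyperplane, not data from the paper) verifies the hard-margin constraints $y_i(w \cdot x_i + b) \ge 1$ and computes the geometric margin $2/\|w\|$ that minimizing $\frac{1}{2}\|w\|^2$ maximizes:

```python
import math

def satisfies_margin(w, b, samples):
    """Check the hard-margin constraints y_i (w·x_i + b) >= 1 of Eq. (1)."""
    return all(y * (sum(wj * xj for wj, xj in zip(w, x)) + b) >= 1
               for x, y in samples)

def margin_width(w):
    """Geometric margin 2/||w|| — the quantity maximized by minimizing (1/2)||w||^2."""
    return 2.0 / math.sqrt(sum(wj * wj for wj in w))

# Toy data: one positive and one negative example (illustrative values only).
samples = [([2.0, 2.0], 1), ([0.0, 0.0], -1)]
w, b = [0.5, 0.5], -1.0          # candidate hyperplane for this toy case
print(satisfies_margin(w, b, samples), margin_width(w))
```

For these two points the candidate hyperplane satisfies both constraints with equality, so both samples are support vectors and the margin equals the full distance between them.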

In practice, the data may be linearly inseparable. In that case, the samples can be mapped to a high-dimensional space, but the dimension of that space can be extremely large, which makes direct computation difficult. Here the kernel function plays an important role: although the features are still implicitly transformed from low dimension to high dimension, the kernel performs the calculation in the low-dimensional space in advance, while the classification effect is manifested in the high-dimensional space. This avoids complicated calculations carried out directly in the high-dimensional space.
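This "compute in low dimension, classify in high dimension" idea can be made concrete. For 2-D inputs, the degree-2 polynomial kernel $(\langle x, z\rangle + 1)^2$ equals the ordinary inner product of an explicit 6-D feature map; the sketch below (toy vectors chosen for illustration) confirms the equality:

```python
import math

def poly_kernel(x, z, R=1.0, d=2):
    """Degree-2 polynomial kernel, evaluated entirely in the original 2-D space."""
    return (sum(xi * zi for xi, zi in zip(x, z)) + R) ** d

def phi(x):
    """Explicit 6-D feature map whose inner product reproduces (⟨x,z⟩ + 1)^2."""
    x1, x2 = x
    s = math.sqrt(2.0)
    return [x1 * x1, x2 * x2, s * x1 * x2, s * x1, s * x2, 1.0]

x, z = [1.0, 2.0], [3.0, -1.0]
lhs = poly_kernel(x, z)                               # cheap: 2-D computation
rhs = sum(a * b for a, b in zip(phi(x), phi(z)))      # expensive: 6-D inner product
print(lhs, rhs)                                       # both equal 4.0
```

The kernel evaluation never materializes the 6-D features, which is exactly why high-dimensional mappings remain computationally tractable.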

In practical applications, we often rely on prior domain theoretical knowledge to select an effective kernel function. The widely used kernel functions mainly include:

the polynomial kernel, the Gaussian (radial basis function) kernel, and the linear kernel:

$$k(x_1, x_2) = \left( \langle x_1, x_2 \rangle + R \right)^d \qquad (2)$$

$$k(x_1, x_2) = \exp\left( -\frac{\|x_1 - x_2\|^2}{2\sigma^2} \right) \qquad (3)$$

$$k(x_1, x_2) = \langle x_1, x_2 \rangle \qquad (4)$$

For different problems and data, choosing different parameters in fact yields different kernel functions. At the same time, the choice of kernel function parameters directly affects the prediction accuracy and classification performance of the support vector machine.
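Eqs. (2)-(4) translate directly into code. The sketch below implements the three kernels with assumed default parameters ($R = 1$, $d = 3$, $\sigma = 1$; the paper does not fix these values):

```python
import math

def linear_kernel(x, z):
    """Eq. (4): plain inner product."""
    return sum(xi * zi for xi, zi in zip(x, z))

def polynomial_kernel(x, z, R=1.0, d=3):
    """Eq. (2): (⟨x,z⟩ + R)^d. R and d are illustrative defaults."""
    return (linear_kernel(x, z) + R) ** d

def gaussian_kernel(x, z, sigma=1.0):
    """Eq. (3): exp(-||x-z||^2 / (2σ^2)). σ is an illustrative default."""
    sq = sum((xi - zi) ** 2 for xi, zi in zip(x, z))
    return math.exp(-sq / (2.0 * sigma ** 2))

print(linear_kernel([1.0, 0.0], [0.0, 1.0]))   # orthogonal vectors -> 0
print(gaussian_kernel([1.0, 2.0], [1.0, 2.0])) # identical points -> 1
```

Varying $R$, $d$, or $\sigma$ changes the kernel value for the same pair of inputs, which is the parameter sensitivity the paragraph above describes.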

Particle Swarm Optimization Algorithm (PSO)

Particle swarm optimization (PSO) was first proposed by Kennedy [

The PSO algorithm simulates the intelligent group search behavior that emerges from cooperation and competition within bird flocks [

$$\begin{cases} V_i^{k+1} = w V_i^k + c_1 r_1 \left( p_i^k - X_i^k \right) + c_2 r_2 \left( p_g^k - X_i^k \right) \\ X_i^{k+1} = X_i^k + V_i^{k+1} \end{cases} \qquad (5)$$

In the formula, w is the inertia weight; k is the current iteration number; i = 1, 2, ⋯, m indexes the particles; r_1 and r_2 are random numbers uniformly distributed in the interval [0, 1]; c_1 and c_2 are the learning factors; p_i^k is the best solution found by particle i up to the k-th iteration, and p_g^k is the global best solution found so far.
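One iteration of Eq. (5) can be sketched as follows. The coefficient values (w = 0.7, c_1 = c_2 = 1.5) are common illustrative choices, not settings reported in the paper:

```python
import random

def pso_step(X, V, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random):
    """Apply the velocity and position updates of Eq. (5) to every particle.

    X, V, pbest: lists of per-particle position, velocity, and personal-best vectors.
    gbest: the global best position found so far.
    """
    newX, newV = [], []
    for xi, vi, pi in zip(X, V, pbest):
        r1, r2 = rng.random(), rng.random()          # fresh r1, r2 each particle
        v = [w * v_d + c1 * r1 * (p_d - x_d) + c2 * r2 * (g_d - x_d)
             for v_d, x_d, p_d, g_d in zip(vi, xi, pi, gbest)]
        x = [x_d + v_d for x_d, v_d in zip(xi, v)]   # X^{k+1} = X^k + V^{k+1}
        newX.append(x)
        newV.append(v)
    return newX, newV
```

Note that a converged particle (its position equal to both its personal best and the global best) with zero velocity stays put, since every term of the velocity update vanishes.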

This article uses 49 sets of experimental data, some of which are shown in

Among them, a_{p}, v_{c}, and f are used as input variables, and the surface roughness Ra is used as the output variable, with the data randomly divided into a training set and a test set. After the 49 groups of samples were randomly shuffled, the first 40 groups were used as the training set and the remaining 9 groups as the test set. The overall algorithm flow is shown in
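The shuffled 40/9 split described above can be sketched as follows (the seed value is an illustrative assumption to make the example reproducible):

```python
import random

def split_samples(n_samples=49, n_train=40, seed=0):
    """Shuffle the sample indices, then take the first 40 as the training set
    and the remaining 9 as the test set."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)   # seeded only for reproducibility
    return idx[:n_train], idx[n_train:]

train_idx, test_idx = split_samples()
print(len(train_idx), len(test_idx))   # 40 9
```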

The SVM regression toolbox in Matlab is used for modeling. The input training samples are automatically standardized by the solver to remove the influence of dimensions. The Gaussian kernel function is adopted, and the hyperparameters are automatically tuned by the optimization solver. The mean square error and the coefficient of determination are used as indicators to evaluate the accuracy of the model. The modeling results are shown in
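The paper's modeling uses the Matlab toolbox; as a rough illustration of the same pipeline (standardize the inputs, build a Gaussian kernel, fit a regressor), the following pure-Python sketch uses kernel ridge regression as a lightweight stand-in for SVM regression. The five sample rows come from the data table above, while σ = 1 and the ridge λ = 10⁻³ are illustrative assumptions, not the paper's settings:

```python
import math

# Five sample rows from the data table: [a_p, v_c, f] -> Ra
X = [[0.40, 250, 0.125], [0.25, 350, 0.075], [0.35, 200, 0.050],
     [0.20, 200, 0.075], [0.20, 300, 0.050]]
y = [2.45, 1.09, 0.99, 0.90, 0.69]

def standardize(X):
    """Remove the influence of dimensions: zero mean, unit variance per feature."""
    n, d = len(X), len(X[0])
    mean = [sum(r[j] for r in X) / n for j in range(d)]
    std = [math.sqrt(sum((r[j] - mean[j]) ** 2 for r in X) / n) or 1.0
           for j in range(d)]
    return [[(r[j] - mean[j]) / std[j] for j in range(d)] for r in X], mean, std

def rbf(a, b, sigma=1.0):
    """Gaussian kernel of Eq. (3) on standardized inputs."""
    return math.exp(-sum((ai - bi) ** 2 for ai, bi in zip(a, b)) / (2 * sigma ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting (avoids external dependencies)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

Xs, mean, std = standardize(X)
lam = 1e-3                                   # small ridge for numerical stability
K = [[rbf(a, b) + (lam if i == j else 0.0) for j, b in enumerate(Xs)]
     for i, a in enumerate(Xs)]
alpha = solve(K, y)                          # (K + λI) α = y

def predict(x_raw):
    """Standardize a new input with the training statistics, then expand in kernels."""
    x = [(x_raw[j] - mean[j]) / std[j] for j in range(3)]
    return sum(a * rbf(x, s) for a, s in zip(alpha, Xs))
```

With such a small ridge, the fitted model nearly interpolates the five training rows, mirroring the small training error the paper reports for its SVM model.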

The test set is input into the model for testing, and the comparison of model prediction results is shown in. The coefficient of determination of the model is R^2 = 0.908, so the model can be used to characterize the mapping relationship between the three elements of cutting and the surface roughness.

NO. | a_p (mm) | v_c (m/min) | f (mm/rev) | Ra (μm) |
---|---|---|---|---|
1 | 0.40 | 250 | 0.125 | 2.45 |
2 | 0.25 | 350 | 0.075 | 1.09 |
3 | 0.35 | 200 | 0.050 | 0.99 |
4 | 0.20 | 200 | 0.075 | 0.90 |
5 | 0.20 | 300 | 0.050 | 0.69 |

$$\mathrm{Fitness} = \left| \, 0.5 - \mathrm{SVM}(a_p, v_c, f) \, \right| \qquad (6)$$
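Eq. (6) defines the PSO fitness as the distance between the SVM prediction and the 0.5 μm target roughness. A minimal sketch, with a hypothetical stand-in predictor (`stub_model` below is not the fitted model, just a placeholder for illustration):

```python
def fitness(ap, vc, f, svm_predict, target_ra=0.5):
    """Eq. (6): |target - SVM(a_p, v_c, f)|. Smaller is better, 0 is ideal."""
    return abs(target_ra - svm_predict(ap, vc, f))

def stub_model(ap, vc, f):
    """Hypothetical stand-in for the trained SVM model (illustrative trend only)."""
    return 10.0 * f + ap

gap = fitness(0.2, 300, 0.05, stub_model)   # absolute gap to the 0.5 μm target
print(gap)
```

In the actual procedure, PSO evaluates this fitness for each particle's (a_p, v_c, f) position, so the swarm converges toward parameter combinations whose predicted roughness matches the target.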

In the current cutting process, the design of cutting parameters is largely subjective and experience-driven. In order to achieve the expected surface roughness of the machined workpiece, this paper uses a support vector machine (SVM) to establish a model between the three elements of cutting and the surface roughness, and then applies the particle swarm algorithm with the surface roughness as the optimization goal. Through parameter analysis and optimization, the following conclusions are obtained:

1) For difficult-to-machine materials such as 304 stainless steel, a support vector machine (SVM) is used for modeling. The resulting mapping model between the cutting parameters and the surface roughness has high accuracy and small error, and can provide a reference for parameter optimization or theoretical derivation.

2) The particle swarm algorithm (PSO) is used to optimize the cutting parameters, and a set of feasible optimized cutting parameters can be obtained for any expected surface roughness value, which provides a reference for process engineers when designing machining parameters and saves experimental costs.

3) Compared with other intelligent modeling and optimization algorithms, SVM-PSO is more convenient, concise, and efficient, and provides a new idea for the optimization design of process parameters.

I would like to express my gratitude to all those who have helped me during the writing of this thesis. I gratefully acknowledge the help of my coworker Liu, and I appreciate his patient encouragement and professional suggestions during my thesis writing.

The authors declare no conflicts of interest regarding the publication of this paper.

Yang, C.W., Jiang, H. and Liu, B.L. (2020) Optimization Design of Cutting Parameters Based on the Support Vector Machine and Particle Swarm Algorithm. Open Access Library Journal, 7: e6788. https://doi.org/10.4236/oalib.1106788