Fully Connected Convolutional Neural Network in PCB Soldering Point Inspection

Abstract

In the Electronics Manufacturing Services (EMS) industry, Printed Circuit Board (PCB) inspection is difficult, especially the inspection of soldering points, owing to their extremely small size and the inconsistent appearance caused by uneven heating in the reflow soldering process. Conventional computer vision techniques based on OpenCV or Halcon often produce false-positive calls for originally good soldering points, because they rely on a pre-defined color-proportion threshold to decide whether a specific soldering point is OK or NG (not good), while soldering points take on a wide variety of forms after heating in the reflow soldering process. This paper puts forward a VGG-style deep convolutional neural network, named SolderNet, that processes soldering point images after the reflow heating process to inspect soldering point status effectively, reduce the omission and error rates, and increase the first pass rate. SolderNet consists of 11 hidden convolutional layers and 3 densely connected layers. Accuracy is reported separately for OK point recognition and NG point recognition: 92% is achieved for OK soldering points and 99% for NG soldering points. The dataset was collected from the KAGA Co. Ltd. plant in Suzhou, where the first pass rate increased from about 25% to 80% overall.

Share and Cite:

Cai, B. (2022) Fully Connected Convolutional Neural Network in PCB Soldering Point Inspection. Journal of Computer and Communications, 10, 62-70. doi: 10.4236/jcc.2022.1012005.

1. Introduction

Labor-intensive industrial production can no longer meet the needs of modern manufacturing. In the past two decades, automation technology has been widely adopted across industrial production to improve efficiency. However, in industrial settings, and especially on Surface Mount Technology (SMT) production lines that involve solder joint inspection, defective products arise for various reasons whether manufacturing is manual or automated. These defective products must not leave the production line, as escapes would cause immeasurable losses. Factories therefore station professional inspectors at the back end of each production line to visually check whether the solder joints of components are defective. However, long hours of high-load work in the closed factory environment easily cause fatigue, so many defective products escape through human error. In addition, because of non-standard management, many domestic factories do not record material numbers, so when a quality problem appears in the end product, the specific production step responsible cannot be traced. As a result, a very common scene in large factories is rows of machines manufacturing products at the front of the line, with dense ranks of human inspectors at the back.

To replace manual work, many automation equipment manufacturers have, over the past ten years, used image template matching instead of manual detection of solder joint defects and begun to develop automatic visual inspection. However, current automatic visual inspection equipment is based on traditional image pattern recognition. Although some defects can be detected by grayscale comparison, in real production environments uneven heating and process deviations mean that many good products show somewhat different color distributions or appearances. These normal variations, which the human eye discriminates easily, cannot be handled by automatic visual inspection equipment. To ensure that bad solder joints do not escape, such equipment judges nearly 50% of good products to be defective. This high over-inspection rate does not reduce labor; it duplicates it, because every board flagged by the machine must be inspected again by hand, and the current average price of automatic visual inspection equipment is about 300,000 yuan. 3D detectors, which measure three-dimensional geometry, raise accuracy to 85% - 90%, but they cost around 1 million yuan, which many small and medium-sized factories cannot afford. Because of these visual inspection problems, factories have been unable to move from automation to intelligence: the visual inspection stage remains labor-intensive and demands high professional skill, with all inspection workers needing about 3 months of training before taking up their posts. Given the harsh environment of the production workshop, the turnover rate of factory inspectors is high, and the resulting labor cost is a thorny problem for contemporary factories. For all these reasons, modern factories must rely on artificial intelligence technology to fundamentally solve the last labor-intensive stage of factory production: visual inspection.

2. Literature Review

Deep learning, also known as layered representations learning or hierarchical representations learning, is a class of machine learning algorithms that uses tens or even hundreds of successive layers of representations to learn automatically from exposure to training data [1]. These layered representations in deep learning are learned via neural networks, structured in literal layers stacked on top of each other. The neural networks are artificial and were developed in part by drawing inspiration from our understanding of the brain.

For solder point inspection, Kawai (2004) states that if wrong temperature conditions or unbalanced heating are used during the soldering process, the solder is highly likely to spread insufficiently [3]. With the assistance of high-resolution cameras, novel lighting devices, and illumination techniques on the hardware side, and digital computing, image processing, and machine vision on the software side, Automatic Optical Inspection (AOI) machines have been developed in recent years to process solder point images after the reflow process [3] [4] [5]. Although AOI machines can detect most defect images, the many possible forms of solder point appearance mean that conventional computer vision techniques cannot achieve both a low omission rate and a low error rate, owing to their lack of generalization ability [6] [7] [8]. Li et al. (2021) proposed an inspection method via a generator-adversarial-network based template, consisting of GAN template-generator training, offline modelling, and real-time online inspection [2]. They achieved a 0.15% error rate. However, the method is specific to IC solder joints and cannot be used to inspect all kinds of solder points. Other researchers are investigating modern deep learning techniques to detect defective solder points effectively while passing OK points with low omission and error rates.

3. Methodology

Convolutional neural networks (CNNs) are used mainly in image processing and computer vision. Layers within a CNN are composed of neurons organized along three dimensions: the spatial dimensions of the input (height and width) and the depth. Neurons within any given layer connect only to a small region of the preceding layer, over which their convolution is computed. Three types of layers (convolutional layers, pooling layers, and fully connected layers) make up CNNs.

The convolutional layer determines the output of neurons connected to local regions of the input by computing the scalar product between their weights and the connected region of the input volume. The pooling layer performs down-sampling along the spatial dimensions of the given input, further reducing the number of parameters. The fully connected layers produce class scores from the activations for classification.
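The scalar-product computation performed by a convolutional layer can be made concrete with a short sketch. The following is a minimal, illustrative example (not from the paper) of a single-channel convolution: a 3 × 3 kernel slides over the input and a scalar product is taken at each position. The 250 × 250 input size matches the network input described below, while the random kernel stands in for a learned filter.

```python
# Illustrative sketch (not from the paper): one feature map produced by
# sliding a 3 x 3 kernel over a single-channel input with stride 1 and
# no padding, computing a scalar product at each position.
import numpy as np

def conv2d(image, kernel, stride=1):
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh = (ih - kh) // stride + 1
    ow = (iw - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            region = image[i * stride:i * stride + kh,
                           j * stride:j * stride + kw]
            out[i, j] = np.sum(region * kernel)  # scalar product
    return out

image = np.random.rand(250, 250)       # stand-in for one image channel
kernel = np.random.randn(3, 3)         # stand-in for one learned filter
feature_map = np.maximum(conv2d(image, kernel), 0.0)  # ReLU activation
print(feature_map.shape)               # (248, 248): no padding applied
```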

During training, the input to the convolutional neural network is a 250 × 250 RGB image. In this study, each image was preprocessed by decoding the JPEG content into RGB grids of pixels. The RGB grids were then converted into floating-point tensors, and the mean RGB value was subtracted from each pixel. The pixel values were then rescaled from the original 0-255 range to the [0, 1] interval. Moreover, to avoid overfitting, data augmentation was applied to generate more training data from the existing samples, ensuring that the model would never see exactly the same picture twice during training. The rotation range was set to 40, meaning that pictures would be rotated randomly by 0 to 40 degrees. The shear and shift ranges were set to 0.2, meaning that pictures would be randomly sheared and translated vertically and horizontally by up to a fraction of 0.2 of the overall size. Feature-wise standardization was also applied, dividing the inputs by the standard deviation of the dataset.
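A sketch of this preprocessing and augmentation pipeline, written with the Keras ImageDataGenerator API, is shown below. The directory layout, batch size, and the sample used to fit the feature-wise statistics are illustrative assumptions; the augmentation parameters follow the values stated above.

```python
# Sketch of the preprocessing and augmentation pipeline described above.
# 'data/train' and the fitting sample are hypothetical placeholders.
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,                   # rescale pixels to [0, 1]
    rotation_range=40,                   # random rotation, 0-40 degrees
    width_shift_range=0.2,               # horizontal shift, fraction 0.2
    height_shift_range=0.2,              # vertical shift, fraction 0.2
    shear_range=0.2,                     # shear, fraction 0.2
    featurewise_center=True,             # subtract the dataset mean
    featurewise_std_normalization=True)  # divide by the dataset std

# Feature-wise statistics must be fitted on a sample of the training data.
sample_images = np.random.rand(100, 250, 250, 3)  # stand-in sample
train_datagen.fit(sample_images)

train_generator = train_datagen.flow_from_directory(
    'data/train',                        # hypothetical directory layout
    target_size=(250, 250),              # resize to the 250 x 250 input
    batch_size=32,                       # assumed batch size
    class_mode='binary')                 # two classes: OK vs. NG
```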

The image was passed through a stack of convolutional layers with filters set to a small receptive field of 3 × 3 to capture the notions of left/right, up/down, and center at the smallest size. The convolution stride was set to 1 pixel, and the padding was set to 1 pixel for the 3 × 3 convolutional layers. Max-pooling was performed over a 2 × 2 pixel window with stride 2. The activation function for all hidden convolutional layers was the rectified linear unit (ReLU).
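The following is a VGG-style sketch of such a network with 11 convolutional layers and 3 densely connected layers, matching the counts reported in the abstract. The paper does not give per-layer filter counts or dense-layer widths, so the values below are assumptions chosen to land near the roughly 30 million parameters mentioned in the conclusions.

```python
# VGG-style sketch of SolderNet: 11 convolutional layers in five blocks
# plus 3 densely connected layers. Filter counts, pooling placement, and
# dense-layer widths are assumptions, not values stated in the paper.
from tensorflow.keras import layers, models

def conv_block(model, filters, n_convs):
    # n_convs 3 x 3 convolutions (stride 1, padding 1), then 2 x 2 pooling.
    for _ in range(n_convs):
        model.add(layers.Conv2D(filters, (3, 3), strides=1,
                                padding='same', activation='relu'))
    model.add(layers.MaxPooling2D((2, 2), strides=2))

model = models.Sequential()
model.add(layers.Input(shape=(250, 250, 3)))  # 250 x 250 RGB input
conv_block(model, 64, 2)     # 2 convolutional layers
conv_block(model, 128, 2)    # 2 convolutional layers
conv_block(model, 256, 3)    # 3 convolutional layers
conv_block(model, 512, 2)    # 2 convolutional layers
conv_block(model, 512, 2)    # 2 convolutional layers -> 11 in total

model.add(layers.Flatten())
model.add(layers.Dense(1024, activation='relu'))
model.add(layers.Dropout(0.5))             # dropout on the first dense layer
model.add(layers.Dense(1024, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))  # OK vs. NG probability
model.summary()
```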

The overall recognition process is shown in Figure 1.

The CNN transforms the original input layer by layer, using convolution and down-sampling, to produce class scores for classification. The following plots visualize activations taken from selected convolutional layers of the deep neural network built in this study. Figure 2 shows the feature maps at different network layers.

Figure 1. Flow chart for the soldering point inspection process.

Figure 2. Visualization of convolutional neural network activations: (a) first layer; (b) second layer; (c) third layer; (d) fourth layer; (e) fifth layer; (f) sixth layer; (g) seventh layer; (h) eighth layer.

It is easy to see that the convolutional layers have successfully picked up characteristics unique to specific solder joint features. Different convolutional layers respond to different solder joint features and create feature maps through learned filters that summarize the presence of those features. The fully connected layers combine the outputs of the convolutional layers to produce the final classification score for a given solder joint.
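Activation maps like those in Figure 2 can be extracted by wrapping the trained network in a second model that returns intermediate outputs. The sketch below assumes the model variable from the architecture sketch above; the stand-in input batch and the plotting layout are illustrative.

```python
# Sketch of how activation maps such as those in Figure 2 can be
# extracted: a second model returns the output of every convolutional
# layer. 'model' is the trained network from the sketch above; the input
# batch here is a stand-in for a preprocessed solder joint image.
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras import layers, models

conv_outputs = [layer.output for layer in model.layers
                if isinstance(layer, layers.Conv2D)]
activation_model = models.Model(inputs=model.inputs, outputs=conv_outputs)

image_batch = np.random.rand(1, 250, 250, 3)  # stand-in input image
activations = activation_model.predict(image_batch)

# Show the first channel of the first and fourth convolutional layers.
for idx in (0, 3):
    plt.figure()
    plt.imshow(activations[idx][0, :, :, 0], cmap='viridis')
    plt.title(f'Convolutional layer {idx + 1}, channel 0')
plt.show()
```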

An NVIDIA Quadro P4000 graphics processing unit was used to train the model in TensorFlow and to produce predictions of the image labels. Training was carried out by minimizing a binary cross-entropy loss function using mini-batch gradient descent with momentum. A weight decay of 5 × 10⁻⁴ and dropout regularization with a dropout ratio of 0.5 on the first fully connected layers were applied during training. The learning rate was set to 10⁻⁴ to avoid skipping over local minima. During training, the weights were updated after each batch.
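A sketch of this training configuration is given below, reusing the model and train_generator variables from the earlier sketches. The momentum value (0.9) and the epoch count are assumptions, as is the use of the optimizer-level weight_decay argument to realize the 5 × 10⁻⁴ weight decay.

```python
# Training-configuration sketch for the paragraph above: binary
# cross-entropy minimized by mini-batch SGD with momentum, learning rate
# 1e-4, weight decay 5e-4. The momentum value (0.9) and epoch count are
# assumptions; 'model' and 'train_generator' come from earlier sketches.
from tensorflow.keras import optimizers

model.compile(
    # The optimizer-level weight_decay argument needs Keras >= 2.11; an
    # alternative is kernel_regularizer=l2(5e-4) on each layer.
    optimizer=optimizers.SGD(learning_rate=1e-4, momentum=0.9,
                             weight_decay=5e-4),
    loss='binary_crossentropy',
    metrics=['accuracy'])

# Weights are updated once per mini-batch drawn from the generator.
model.fit(train_generator, epochs=30)  # epoch count is illustrative
```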

4. Results Analysis

The soldering point recognition accuracy achieved by the convolutional neural network is 92% for OK points and 99% for NG points. NG point detection is more important than OK point detection: if an NG point goes unidentified, the PCB is defective and cannot be sold on the market. However, OK and NG detection is a trade-off; higher NG detection accuracy generally comes at the cost of lower OK detection accuracy. Apart from improving the detection technique itself, there is no practical way to raise one rate while holding the other constant.
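This trade-off can be illustrated by sweeping the decision threshold applied to the network's sigmoid output: a lower threshold flags more points as NG, raising NG accuracy while lowering OK accuracy. The sketch below uses stand-in validation data and assumes the label convention 1 = NG, 0 = OK.

```python
# Sketch of the OK/NG trade-off: sweeping the decision threshold on the
# sigmoid output raises NG accuracy at the cost of OK accuracy. The
# validation arrays are stand-ins; label 1 = NG and 0 = OK is an assumed
# convention, and 'model' is the trained network from earlier sketches.
import numpy as np

x_val = np.random.rand(200, 250, 250, 3)   # stand-in validation images
y_val = np.random.randint(0, 2, size=200)  # stand-in validation labels

probs = model.predict(x_val).ravel()       # predicted P(NG) per image
for threshold in (0.3, 0.5, 0.7):
    pred = (probs >= threshold).astype(int)
    ok_acc = np.mean(pred[y_val == 0] == 0)  # OK points passed as OK
    ng_acc = np.mean(pred[y_val == 1] == 1)  # NG points caught as NG
    print(f'threshold={threshold}: OK accuracy={ok_acc:.2f}, '
          f'NG accuracy={ng_acc:.2f}')
```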

The difference between OK and NG points is not obvious in nature. Due to uneven heating in the reflow process, the difference is extremely hard to detect, even for a professional human inspector. Figure 3 exhibits OK and NG solder point images.

It is easy to see that the difference between NG and OK points is extremely small; even when the image is greatly enlarged, the difference remains hard to detect.

5. Discussion

Take the general industrial defect detection system, the company's main product, as an example. By learning from tens of thousands of solder joint pictures, and by generating pictures of defective products from small samples to build big data, it replaces professional technicians in judging the quality of solder joints on circuit boards after reflow soldering. This product can work alongside a traditional 2D automatic visual inspection system, in place of professional inspectors, to re-inspect the large number of solder joints misjudged as defective by AOI equipment, flagging the truly defective solder joints and passing most of the good products.

Nowadays, some companies have invented 3D AOI inspection machines that add a third light source, usually a camera, on the z-axis. With this camera, one can detect bumps on the surface of the PCB board, which are the hardest features to recognize in a 2D environment: a human can judge the goodness of such a point only by examining tiny differences in the projections of the bumps. This obstacle also limits deep learning algorithms operating on 2D images. Therefore, to validate the stability and generalizability of our model, more data should be collected. Methods integrated with conventional pattern recognition may also be considered in future work, using pattern recognition to first filter OK points with a zero error rate and then applying a neural network for defect point inspection. Other improvements, such as better lighting conditions and changes to the reflow heating mechanism, can also be considered.

Figure 3. NG point (left) vs. OK point (right).

6. Conclusions

The general industrial defect detection system developed in this paper detects various industrial defects by simulating, through deep neural networks, the human brain's image processing, analysis, and decision-making processes [10] [11]. Taking the intelligent solder joint inspection system as an example, intelligent solder joint inspection builds on 2D AOI equipment by adding a deep learning model combined with big data technology, training more than 30 million parameters and scanning every pixel of the component, with the final judgment obtained by layer-by-layer decomposition and discrimination of the image through the convolutional layers. A convolutional layer plays a role comparable to a nucleus of the human brain, but its function is more specific: each convolutional layer examines a particular aspect of the image, and stacking dozens of convolutional layers detects the detailed contours of the image in an all-round way. The results are then passed to the fully connected layers, which comprehensively analyze the detection results from the different parts of the image obtained by the convolutional layers, judge the quality of the component's solder joints, and accurately locate the defective parts. The intelligent visual inspection system developed through deep learning enables 2D equipment to exceed the detection accuracy of 3D equipment without adding any hardware: the detection rate for good products reaches 92% and the detection rate for defective products reaches 99%. This not only improves the production efficiency of the factory but also saves costs.

Applying the principles of solder joint inspection, intelligent visual inspection technology can also be used throughout factory production lines, including detection of wrong, missing, or reversed insertions on plug-in production line circuit boards, chip appearance inspection, mobile phone case defect detection, and false-weld detection. It can even be used in industrial derivative fields, such as identifying male and female chickens in animal husbandry and detecting abnormal driving states (road rage, fatigue, distraction) in automotive driver-assistance systems, and therefore has broad application prospects.

Funding Statement

This study was sponsored by Shanghai Pujiang Program (20PJ1418400).

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Lecun, Y., Bengio, Y. and Hinton, G. (2015) Deep Learning. Nature, 521, 436-444.
https://doi.org/10.1038/nature14539
[2] Li, J.M., Cai, N., Mo, Z.K., Zhou, G. and Wang, H. (2021) IC Solder Joint Inspection via Generator-Adversarial-Network Based Template. Machine Vision and Applications, 32, Article No. 96.
https://doi.org/10.1007/s00138-021-01218-1
[3] Kawai, K. (2004) Inspection to Improve Lead-Free Solder Technologies. SMTA International Conference Proceedings, New York.
[4] Hornberg, A. (2006) Handbook of Machine Vision. Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
[5] Davies, E.R. (2005) Machine Vision: Theory, Algorithms, Practicalities. Morgan Kaufmann, San Francisco.
[6] Gasvik, K.J. (2002) Optical Metrology. 3rd Edition, John Wiley & Sons Ltd., Trondheim.
[7] Suck, T.T. (2002) Controlling the Process: Post-Reflow AOI (Automated Optical Inspection) to Ascertain Machine and Process Capability.
https://cdn.intechopen.com/pdfs/38577/InTech-Automatic_optical_inspection_of_soldering.pdf
[8] Norris, M.J. (2002) Advances in Automatic Optical Inspection, Gray Scale Correlation vs. Vectoral Imaging. Journal of Surface Mount Technology, 15, 978-985.
[9] Yang, C.C., Marefat, M.M. and Ciarallo, F.W. (1998) Error Analysis and Planning Accuracy for Dimensional Measurement in Active Vision Inspection. IEEE Transactions on Robotics and Automation, 14, 476-487.
[10] Kokott, J. (2006) The Capability of Modern AOI Systems. Global SMT & Packaging, 6, 16-17.
[11] Holzmann, M. (2004) AOI in a High-Mix/Low-Volume Environment. Circuits Assembly, 23, 30-35.

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.