Deep Learning Optical Character Recognition in PCB Dark Silk Recognition

Abstract

For Automatic Optical Inspection (AOI) machines introduced to the Printed Circuit Board (PCB) market more than five years ago, the illumination techniques and lighting devices are outdated. Images captured by these older AOI machines are difficult for typical optical character recognition (OCR) algorithms to recognize, especially in the case of dark silk. Effectively increasing silk recognition accuracy is therefore indispensable for improving overall production efficiency in an SMT plant. This paper uses a fine-tuned Character Region Awareness for Text Detection (CRAFT) method to build a model for dark silk recognition. The CRAFT model consists of a VGG-based convolutional backbone followed by a structure similar to U-net. A continuous two-dimensional Gaussian distribution is used for the annotation of image segmentation. The CRAFT model recognizes different types of printed characters with high accuracy and transferability. Results show that with the CRAFT model, accuracy for OK boards is 95% (error rate 5%), and accuracy for NG boards is 100% (omission rate 0%).

Share and Cite:

Cai, B. (2023) Deep Learning Optical Character Recognition in PCB Dark Silk Recognition. World Journal of Engineering and Technology, 11, 1-9. doi: 10.4236/wjet.2023.111001.

1. Introduction

Silk recognition, as one of the most important parts of AOI inspection, plays a crucial role in tracking produced PCB boards. However, due to uneven heating in the reflow process, outdated illumination techniques, and insufficient lighting technology, silk recognition is particularly hard for most PCB plants. The error rate is so high that plenty of good boards are classified as bad boards, which results in laborious human re-identification after the AOI machine.

The key technologies that the project needs to solve include how to establish a broad and general model from limited industrial defect samples; how to imitate the way the human brain thinks in order to accurately model industrial inspection objects such as silk screen printing; and how to minimize the omission rate and the error rate at the same time. To address the difficulty of limited industrial defect samples, the "small samples to big data" technology developed by our company at the beginning of this year combines deep adversarial neural networks with traditional computer vision techniques to generate "unlimited" data from "limited" data. To address the difficulty of imitating human-brain thinking in modeling, we invite professional inspectors to label the data on the spot, marking the wrong points on each defective sample that flows through; we then construct a convolutional neural network model, train and validate it on the images, and perform reinforcement learning on misjudged images to achieve accurate modeling. To prevent defective products from flowing out while improving the pass rate of good products, our team exposes adjustable parameters in the model: a dynamically tuned threshold is set to block defective products and improve the pass rate of good products.
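To make the role of the adjustable threshold concrete, below is a minimal sketch assuming hypothetical model defect scores and ground-truth labels; the names classify_with_threshold, scores, and is_defective are illustrative and not part of our deployed system. Lowering the threshold trades a lower omission rate for a higher error rate, and vice versa.

```python
import numpy as np

def classify_with_threshold(defect_scores: np.ndarray, threshold: float) -> np.ndarray:
    """Mark a board NG when its model defect score exceeds the threshold.
    Lowering the threshold blocks more defective boards (lower omission rate)
    at the cost of rejecting more good boards (higher error rate)."""
    return defect_scores > threshold

# Hypothetical scores and ground truth, for illustration only.
scores = np.array([0.05, 0.12, 0.65, 0.91, 0.33, 0.80])
is_defective = np.array([False, False, True, True, False, True])

for t in (0.3, 0.5, 0.7):
    pred_ng = classify_with_threshold(scores, t)
    omission = np.mean(~pred_ng[is_defective])  # defective boards passed as OK
    error = np.mean(pred_ng[~is_defective])     # good boards rejected as NG
    print(f"threshold={t:.1f}  omission={omission:.2f}  error={error:.2f}")
```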

2. Literature Review

A typical AOI machine is composed of an optics module, a camera, and an illumination module [1]. LED-based lighting is used for most AOI illumination purposes, but the amount of illumination used is fixed for a given purpose. Although a programmable illumination module can be applied to decide how much illumination should be used, it also increases the risk of a reduced Field of View (FOV) and an inhomogeneous light distribution [2] [3]. Enlarging the detection area may also decrease the error rate of dark silk inspection, but more computation is needed and the field of view is reduced [4].

Currently, some 3D AOI machines equipped with advanced illumination techniques can handle this issue well [5]. 3D inspection adds a third camera to measure the height of bumps [6] [7]. Nevertheless, tens of thousands of installed AOI machines are still working in the field, so how to improve the inspection accuracy of old AOI machines needs to be resolved. This research embeds a deep learning module in 2D automatic visual inspection equipment and continuously optimizes the model by learning from the experience of manual discrimination, so that the 2D inspection equipment can meet or exceed the performance of 3D inspection equipment. The method adopted in this project is a deep learning, multi-channel, VGG-based convolutional neural network (CNN) CRAFT technique for recognizing dark silk. Figure 1 displays the recognition results on different printed character boards.

Different from traditional pattern-matching algorithms, the general industrial defect detection system developed by our team detects various industrial defects by using deep neural networks to simulate the human brain's image-processing, analysis, and decision-making process.

Figure 1. Illustration of CRAFT model application to printed character recognition.

Figure 2. Flow chart of general labeling and training process.

The intelligent dark silk joint inspection system presented in this paper is based on 2D-AOI equipment, adding a deep learning algorithm model, combining it with big data technology, imitating the human brain's discriminative thinking, and training more than 30 million parameters. The CNN model decomposes the image through its convolution layers to accurately locate the silk position and determine what the dark silk represents. Figure 2 shows the flow chart of the proposed silk recognition procedure.

3. Methodology

The backbone convolutional network of CRAFT is VGG16. On this basis, the authors use a structure similar to U-net, combining shallow and deep convolutional features as the output, which effectively retains shallow structural features and deep semantic features; this idea has been widely used in various scene text detection models. After the U-net-like structure, the network adds a series of convolution layers [8]. The final 1×1 convolution layer uses two convolution kernels to output two branches. The first branch is the probability that each pixel is at the center of a character (position score), and the second branch is the probability that each pixel is in the gap between characters (neighborhood score). From these two outputs we obtain the character positions and the connections between characters, respectively, and then integrate the results into text boxes [9]. Figure 3 displays the neural network structure of CRAFT.

Figure 3. The backbone network of CRAFT.
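As a concrete illustration, the following is a minimal PyTorch sketch of the architecture just described: a VGG16 backbone, U-net-style skip connections, and a head ending in a 1×1 convolution with two kernels, one per score map. The stage boundaries, channel widths, and the name CraftLikeNet are assumptions made for this sketch, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class UpConv(nn.Module):
    """1x1 then 3x3 convolution block applied after each skip concatenation."""
    def __init__(self, in_ch, mid_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1), nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class CraftLikeNet(nn.Module):
    """VGG16-BN backbone, U-net-style skips, two-channel 1x1 output head."""
    def __init__(self):
        super().__init__()
        vgg = torchvision.models.vgg16_bn(weights=None).features
        self.stage1 = vgg[:13]    # 128 channels at 1/2 resolution
        self.stage2 = vgg[13:23]  # 256 channels at 1/4 resolution
        self.stage3 = vgg[23:33]  # 512 channels at 1/8 resolution
        self.stage4 = vgg[33:43]  # 512 channels at 1/16 resolution
        self.up3 = UpConv(512 + 512, 512, 256)
        self.up2 = UpConv(256 + 256, 256, 128)
        self.up1 = UpConv(128 + 128, 128, 64)
        # Final 1x1 convolution with two kernels: one per score map.
        self.head = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 2, 1),
        )

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        f4 = self.stage4(f3)
        y = self.up3(torch.cat([F.interpolate(f4, size=f3.shape[2:], mode="bilinear", align_corners=False), f3], 1))
        y = self.up2(torch.cat([F.interpolate(y, size=f2.shape[2:], mode="bilinear", align_corners=False), f2], 1))
        y = self.up1(torch.cat([F.interpolate(y, size=f1.shape[2:], mode="bilinear", align_corners=False), f1], 1))
        out = self.head(y)
        return out[:, 0:1], out[:, 1:2]  # position score, neighborhood score

# Smoke test: both score maps come out at 1/2 of the input resolution.
pos, neigh = CraftLikeNet()(torch.randn(1, 3, 224, 224))
print(pos.shape, neigh.shape)  # torch.Size([1, 1, 112, 112]) twice
```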

In previous work, the annotation for image segmentation was often binary, 0 or 1: a pixel is either inside the text line or outside it. However, even for pixels within a text line, there is a distinction between center and edge. In CRAFT, the annotation for image segmentation is a continuous two-dimensional Gaussian distribution [10]. Pixels located at the center of a character box have a higher position score, while pixels located at the edge of the character box have a lower position score, so the model makes full use of each pixel's location information [11] [12]. When generating the neighborhood label, we first connect the diagonals of the character-box quadrilateral, shown as the blue solid lines in the affinity box generation diagram of the original CRAFT paper [8]. Next, we find the barycenters (blue crosses) of the upper and lower triangles. Two adjacent characters yield four triangle barycenters in total, and we define the quadrilateral formed by them as the neighborhood box. Finally, we use the same method as for the position score to generate a Gaussian distribution within the neighborhood box, thereby obtaining the neighborhood score. The final result can be seen in the heat map on the far right of that diagram.
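A minimal sketch of this label-generation step, assuming NumPy and OpenCV: a square Gaussian template is warped into each box with a perspective transform, and the neighborhood box is formed from the four triangle barycenters described above. The template size, the sigma ratio, and the use of the box centre as a stand-in for the exact diagonal intersection are simplifying assumptions of this sketch.

```python
import cv2
import numpy as np

def gaussian_template(size: int = 64, sigma_ratio: float = 0.25) -> np.ndarray:
    """Isotropic 2D Gaussian on a square template, peaking at 1.0 in the centre."""
    ax = np.arange(size, dtype=np.float32) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    sigma = size * sigma_ratio
    return np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))

def paint_score(score_map: np.ndarray, box: np.ndarray, template: np.ndarray) -> None:
    """Warp the template into a quadrilateral box (4x2 corners, clockwise from
    top-left) and merge it into the score map with a pixel-wise maximum."""
    size = template.shape[0]
    src = np.float32([[0, 0], [size - 1, 0], [size - 1, size - 1], [0, size - 1]])
    M = cv2.getPerspectiveTransform(src, box.astype(np.float32))
    h, w = score_map.shape
    np.maximum(score_map, cv2.warpPerspective(template, M, (w, h)), out=score_map)

def neighborhood_box(box1: np.ndarray, box2: np.ndarray) -> np.ndarray:
    """Quadrilateral spanned by the barycenters of the upper and lower triangles
    of two adjacent character boxes (the box centre approximates the diagonal
    intersection here)."""
    def tri_centers(b):
        c = b.mean(axis=0)
        return (b[0] + b[1] + c) / 3.0, (b[2] + b[3] + c) / 3.0
    u1, l1 = tri_centers(box1)
    u2, l2 = tri_centers(box2)
    return np.array([u1, u2, l2, l1], dtype=np.float32)

# Hypothetical usage: two character boxes on a 100x200 label map.
position_score = np.zeros((100, 200), dtype=np.float32)
neighborhood_score = np.zeros((100, 200), dtype=np.float32)
box_a = np.array([[40, 20], [80, 22], [78, 70], [38, 68]], dtype=np.float32)
box_b = np.array([[90, 20], [130, 22], [128, 70], [88, 68]], dtype=np.float32)
tmpl = gaussian_template()
for b in (box_a, box_b):
    paint_score(position_score, b, tmpl)
paint_score(neighborhood_score, neighborhood_box(box_a, box_b), tmpl)
```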

How to obtain reliable character-box annotations from text-box annotations is the biggest highlight of the CRAFT paper. CRAFT adopts weakly supervised learning, which effectively solves this problem. In the early stage of training, the training set consists of synthetic (non-real) images, which come with accurate character-box labels and can therefore be used directly. Synthetic images have data features that are similar but not identical to those of real images, so they provide only limited help for model training. Once the model has some predictive ability, we start to use real images [13] [14] [15].

Because real images lack character-box annotations, the following training scheme is adopted: first, the text lines are cropped out and the currently trained model predicts the position score of each pixel; then, from the distribution of position scores, the number and positions of character boxes as judged by the current model are derived, and these character boxes are used as annotations to train the model. Since the accuracy of the character boxes predicted by the model is not guaranteed at this stage, the corresponding loss is multiplied by a confidence probability. Note that the actual number of characters (the text annotation length) is known; only the positions of the character boxes are unknown. Therefore, the difference between the predicted and actual numbers of characters can measure the accuracy of the prediction, that is, confidence probability = 1 − (difference in the number of characters / actual number of characters). For example, the confidence probabilities for the three text lines shown in Figure 4 are 6/6, 5/7, and 5/6, respectively. To ensure the effectiveness of this training mode, the authors also mix in a small ratio (1:5) of synthetic images with accurate character-box annotations at this training step (Baek et al., 2019) [8]. Figure 4 displays a schematic diagram of this training scheme.
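The confidence rule reduces to a one-line function; the sketch below reproduces the three worked ratios from the text (the name line_confidence is illustrative).

```python
def line_confidence(actual_len: int, predicted_len: int) -> float:
    """Confidence for one text line:
    1 - |actual - predicted| / actual, clamped to [0, 1]."""
    diff = min(actual_len, abs(actual_len - predicted_len))
    return (actual_len - diff) / actual_len

# The three worked examples from the text:
print(line_confidence(6, 6))  # 1.0    -> 6/6
print(line_confidence(7, 5))  # 0.714… -> 5/7
print(line_confidence(6, 5))  # 0.833… -> 5/6

# During weak supervision, the pixel-wise loss over a line's region
# is multiplied by this value before back-propagation.
```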

4. Results Analysis

Using the character detection capability of the vision system, the recognized character information can be written to a server or other locations, and there is no need to worry about transcription errors. Product numbers with many characters are especially prone to such errors; by digitizing the information, these problems can be prevented in advance.

Figure 4. Illustration of CRAFT model training process.

In addition, after the full implementation of information management for items such as product numbers, even if problems such as the outflow of defective products or recalls occur, the corresponding parts and products can be tracked and recovered quickly. It is also possible to trace back and identify problematic processes, which benefits the business. Such information management effectively ensures traceability.

The OCR detection technique completes product inspection with minimal man-hours. To prevent the outflow of defective products, whole-product inspection is an effective method, but visual character inspection is not only time-consuming and labor-intensive but also carries the risk of missed inspection. When the vision system is used, accurate character detection can be performed online, achieving quality assurance and labor-cost reduction at the same time. It reduces yield loss in the inspection process and improves production efficiency very effectively.

After training, the CRAFT model achieves a 100% accuracy rate for NG board recognition and a 95% accuracy rate for OK board recognition in dark silk character inspection. Figure 5 displays the OCR inspection results for dark silk recognition.
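For clarity, the reported accuracies translate into error and omission rates as follows; the board counts in this sketch are hypothetical, chosen only to be consistent with the 95%/100% figures above.

```python
def board_rates(ok_passed: int, ok_total: int, ng_caught: int, ng_total: int):
    """Error rate = share of good (OK) boards wrongly rejected;
    omission rate = share of defective (NG) boards wrongly passed."""
    return 1.0 - ok_passed / ok_total, 1.0 - ng_caught / ng_total

# Hypothetical counts consistent with the reported 95%/100% accuracies.
error_rate, omission_rate = board_rates(ok_passed=95, ok_total=100,
                                        ng_caught=40, ng_total=40)
print(f"{error_rate:.2f} {omission_rate:.2f}")  # 0.05 (5% error), 0.00 (0% omission)
```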

It is easy to see that after CRAFT image processing, the characters on a dark silk board become visible, even characters that human eyes cannot read clearly.

The technique can detect OCR text, barcodes, and QR codes, and is also suitable for letters of multiple sizes, numbers, signs, and special characters. Supported product materials and marking processes include silk screen printing ink, laser etching, and solution corrosion. The detection accuracy can reach 0.004 mm with a processing speed of 3400 p/h.

5. Discussion

The traditional AOI detection steps are as follows.

1) Program the AOI machine and learn the related PCB and component data.

2) Learn to predict: use optics for detection and algorithm analysis of multiple soldered boards, find the variation patterns of the object to be tested, and establish a standard OK board model.

Figure 5. Results for dark silk recognition.

3) After the study is completed, conduct online debugging and small-batch trial production before mass production; compare the trial-production PCBA with the OK board and, if it is qualified, verify it by manual inspection.

4) Carry out functional tests on the trial-production PCBA; if all results are normal, open the line for mass production.

Traditional AOI online inspection greatly improves the productivity of the production line: it can replace a large amount of manual visual inspection, save labor, improve the pass-through rate, and reduce the misjudgment rate. However, some components or their pins are relatively tall, producing shadows or locally dark regions; since AOI is optical inspection, these areas are generally difficult to illuminate, so there may be blind spots. Therefore, one or two manual visual inspection workers need to be assigned after the AOI machine to minimize defective products [16] [17] [18].

Taking DIP-process wave-soldering furnace inspection as an example, there are many types of defects with complex shapes, and the traditional algorithm based on OK rules can hardly accommodate the multi-morphological characteristics of solder joints. The misjudgment screening rate is about 70%, which greatly increases the operators' re-judgment workload; this workload also easily causes operator fatigue, which increases the risk of missed detection.

In addition, the shapes of wave-soldering solder joints vary greatly, and the traditional algorithm needs to be debugged for each type of solder joint, which greatly increases debugging time. At the same time, traditional AOI operation is complex and demands proficient personnel; once personnel turn over, the equipment's detection performance is difficult to maintain, which affects production efficiency.

6. Conclusions

For this paper's focus, the proposed method is mainly used to detect the screen printing on the front and bottom of the product and to inspect front screen-printing defects. Its main functions include location checking, color and character content recognition and reading, and making judgments in full accordance with the set standards so as to reduce workers' subjective misjudgment. Moreover, it can detect more than two characters at the same time, reducing the number of stations and improving production efficiency. It achieves high practicability and is suitable for various inspections across different product types, thereby upgrading the production line.

Compared to solder-point inspection, dark silk inspection is relatively easy. Solder points take many forms, and due to uneven heating in the reflow process, many OK solder points look similar to NG ones. Dark silk recognition, by contrast, does not involve many forms of defects, whereas the gaps between different forms of solder points are significant. The internal uniformity of dark silk is high, and there are no more than 30 types of printed characters. The most difficult part is locating and highlighting the dark characters, and the CRAFT model provides a way to do so.

The requirements for algorithms and computing power in practical application scenarios continue to increase. Industrial vision realizes its functions by reading and analyzing images and videos of real scenes. The amount of data contained in images and videos is large, but there is also a lot of redundant information, and a single simple feature-extraction algorithm can hardly meet the requirement of generality. At the same time, as application scenarios and functions become more complex, the demands on computing power and storage speed keep increasing when designing universal feature-extraction algorithms. This leads to relatively high development costs and product prices.

Funding Statement

This study was sponsored by Shanghai Pujiang Program (20PJ1418400).

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Fan, K.C. and Hsu, C. (2005) Strategic Planning of Developing Automatic Optical Inspection (AOI) Technologies in Taiwan. Journal of Physics, Conference Series, 13, 394-397.
https://doi.org/10.1088/1742-6596/13/1/090
[2] Krippner, P. (2006) T&M, Zero-Defect IC Inspection Strategy with AOI, 2006 Electronics Manufacturing. In: Janóczki, M., Becker, Á., Jakab, L., Gróf, R. and Takács, T., Eds., Automatic Optical Inspection of Soldering.
https://cdn.intechopen.com/pdfs/38577/InTech-Automatic_optical_inspection_of_soldering.pdf
[3] Shirvaikar, M. (2006) Trends in Automated Visual Inspection. Journal of Real-Time Image Processing, 1, 41-43.
https://doi.org/10.1007/s11554-006-0009-6
[4] Koo, J.H. and Yoo, S.I. (1998) A Structural Matching for Two-Dimensional Visual Pattern Inspection. 1998 IEEE International Conference on Systems, Man and Cybernetics, Diego, CA, 14 October 1998, 4429-4434.
[5] Wu, W.-Y., Wang, M.-J.J. and Liu, C.-M. (1996) Automated Inspection of Printed Circuit Boards through Machine Vision. Computers in Industry, 28, 103-111.
https://doi.org/10.1016/0166-3615(95)00063-1
[6] Rau, H. and Wu, C.H. (2005) Automatic Optical Inspection for Detecting Defects on Printed Circuit Board Inner Layers. The International Journal of Advanced Manufacturing Technology, 25, 940-946.
https://doi.org/10.1007/s00170-004-2299-9
[7] Ries, B. (2001) New Advances in AOI Technologies. Surface Mount Technology, 23, 62-66.
[8] Baek, Y.M., Lee, B., Han, D., Yun, S.D. and Lee, H. (2019) Character Region Awareness for Text Detection.
https://doi.org/10.48550/arXiv.1904.01941
[9] Tietze, H. and Kokott, J. (2013) (GOEPEL Electronic GmbH) Application of AOI Systems in Backplane Manufacturing. In: Janóczki, M., Becker, Á., Jakab, L., Gróf, R. and Takács, T., Eds., Automatic Optical Inspection of Soldering.
https://cdn.intechopen.com/pdfs/38577/InTech-Automatic_optical_inspection_of_soldering.pdf
[10] Suck, T.T. (2002) Controlling the Process: Post-Reflow AOI (Automated Optical Inspection) to Ascertain Machine and Process Capability.
http://www.advprecision.com/pdf/AOI_Process_Control.pdf
[11] Miller, D. (2009) Exploring AOI and X-Ray.
http://www.dataweek.co.za/news.aspx?pklNewsId=31727&pklCategoryID=49
[12] Norris, M.J. (2002) Advances in Automatic Optical Inspection, Gray Scale Correlation vs. Vectoral Imaging. Journal of Surface Mount Technology, 15, 238-245.
[13] Yang, C.C., Michael, M.M. and Ciarallo, F.W. (1998) Error Analysis and Planning Accuracy for Dimensional Measurement in Active Vision Inspection. IEEE Transactions on Robotics and Automation, 14, 476-487.
https://doi.org/10.1109/70.678456
[14] Kokott, J. (2006) The Capability of Modern AOI Systems. Global SMT & Packaging: European Edition (Surface Mount Technology), 6, 16-17.
[15] Holzmann, M. (2004) AOI in a High-Mix/Low-Volume Environment: The Cost Savings When Using AOI Can Pay for the System on One Job under the Right Circumstances. Circuits Assembly, Amesbury MA, 30-35.
[16] Fidan, I., Kraft, R.P., Ruff, L.E. and Derby, S.J. (1998) Designed Experiments to Investigate the Solder Joint Quality Output of a Prototype Automated Surface Mount Replacement System. IEEE Transactions on Components, Packaging, Manufacturing Technology, 21, 172-181.
https://doi.org/10.1109/3476.720414
[17] Pan, J., Tonkay, G.L., Storer, R.H., Sallade, R.M. and Leandri, D.J. (1999) Critical Variables of Solder Paste Stencil Printing for Micro-BGA and Fine Pitch QFP. 24th IEEE/CPMT International Electronics Manufacturing Technology Symposium, Austin, 19-19 October 1999, 94-101.
[18] He, D., Ekere, N.N. and Currie, M.A. (1998) The Behavior of Solder Pastes in Stencil Printing with Vibrating Squeegee. IEEE Transactions on Components, Packaging, Manufacturing Technology, 21, 317-324.
https://doi.org/10.1109/TCPMC.1998.7102530

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.