
Handwritten Numeric and Alphabetic Character Recognition and Signature Verification Using Neural Network

Md. Badrul Alam Miah, Ahsan Habib, Autish Chandra Moulik, Md. Shariful Islam, Mohammad Zakareya, Arafat Ullah, Md. Atiqur Rahman, Md. Al Hasan


1. Introduction

Prominent advances in handwritten character recognition have become possible because neural networks can learn distinguishing features from large amounts of labeled data [1] . Such systems underpin many applications in fields such as education (digital dictionaries), business, post offices, banking (handwritten courtesy amounts), security systems, and even robotics. Handwritten character recognition is difficult because of the great variation in handwriting and the differing sizes (length and height) and orientation angles of the characters [2] . The signature is a widely accepted legal biometric used universally, and signature recognition techniques are common in commercial applications; personal authentication often depends on the handwritten signature [3] [4] [5] [6] . The proposed system has therefore been designed for both signature and handwritten character recognition.

There are two types of authentication methods: online and offline [7] . This system works on offline signatures, i.e., scanned images of individual signatures. Both signature verification and character recognition involve data capture, preprocessing, feature extraction, experimentation, and performance evaluation. To achieve high recognition performance, an efficient feature extraction method is selected for both handwritten character and signature recognition. In the back end, an artificial neural network performs the classification and recognition tasks; for offline recognition, neural networks have emerged as fast and reliable classifiers capable of high recognition accuracy. Several feature extraction techniques have been proposed in the literature for signature verification [8] . Here, the feature extraction procedure is repeated for all blocks, yielding 52 features per character. These extracted features are used to train a feed-forward back-propagation neural network (BPNN) that performs the classification and recognition tasks.

Signature verification is most commonly applied to bank checks, which contain both a signature and handwritten characters. Computer systems are still slower and less accurate than humans at processing handwritten fields [9] . In bank check processing, handwritten text and signatures are major impediments to automation [10] . Segmenting the amount into individual digits is the most delicate task in check processing [11] , and in handwritten numeral recognition the segmentation of connected numerals is the main bottleneck [12] . There are studies in the literature on recognizing characters and signatures written on paper [13] - [18] , and several research papers have examined character recognition systems [19] [20] . This paper highlights some of the most prominent directions of recent research in character recognition and signature verification.

The remainder of this paper is organized as follows. Section 2 reviews related work. Section 3 describes the methods of recognition. Section 4 presents the proposed system. Section 5 discusses the implementation, results, and performance of the proposed system. Section 6 concludes the paper.

2. Related Work

This section reviews work done by various researchers in the fields of handwritten character recognition and signature verification. V. Patil et al. [2] proposed a system that creates a character matrix and a suitable network structure for character recognition; its experiments yield recognition accuracy of more than 70%. In [6] , an efficient method is shown for separating a signature from a nonhomogeneous noisy background, and shape and density feature extraction methods are used to address the problem of simulated signature verification in offline systems. M. B. A. Miah et al. [10] came up with a technique for recognizing numerical digits and signatures; the overall success rate is about 93.4% for digit recognition and 96% for signature recognition. The technique introduced in [12] deals with automatic segmentation of unconstrained handwritten connected numerals. S. K. Dewangan [13] introduced an approach to biometric authentication using a neural network on electronically captured signatures. S. A. Dey [14] discussed a method for feeding rejected characters back into the segmentation process to perform error recovery; this feedback increases the probability of recognizing each character. M. S. Shah et al. [18] demonstrated a back-propagation learning algorithm to detect handwritten courtesy amounts automatically.

This research emphasizes increasing the accuracy of handwritten numeral recognition, alphabetic character recognition, and signature recognition and verification. Accuracy has been improved markedly by using distinctive feature extraction methods combined with training and testing by a neural network.

3. Methods of Recognition

Character and signature recognition and verification systems are used in several fields of technology. A bank check contains many fields, such as the courtesy amount and the signature of the person who wrote the check, as well as symbols and graphics. Character recognition, on the other hand, is used in OCR and digital dictionaries; here, a digital dictionary is a conceptual device that optically detects handwritten words in order to show their definitions.

Handwritten signature recognition can be of two kinds:

1) Online verification: this requires a device connected to a remote computer to run the signature verification live. A stylus is used to sign on an electronic tablet, which acquires the dynamic signature information [21] [22] .

2) Offline verification: the user does not need to be present for verification, which works from a static (fixed) signature image. Static signatures are the form most commonly used to verify documents in the banking system. Because a static signature carries fewer features, verification must be performed more carefully, and achieving accuracy close to 100% is the central challenge of signature verification.

4. The Proposed System

The proposed system is a model for detecting the characters and letters and identifying the signatures of correct persons. The system consists of four modules:

1) Image preprocessing

2) Extraction of characters and signature

3) Segmentation of digits and letters

4) Recognition using a neural network

Figure 1 shows the architecture of the overall system.

4.1. Image Acquisition

Image acquisition is the process of obtaining a digitized image from a real-world source. It can be done using several devices, such as a scanner, digital camera, PDA, web camera, or camcorder [23] . We used a scanner to acquire the samples. The scanned images are shown in Figure 2 below.

Figure 1. Proposed system block diagram.


Figure 2. Scanned image of (a) Signature, (b) Alphabetic Characters, (c) Numeric characters.

4.2. Image Preprocessing

The aim of the designed system is to recognize any signature. The preprocessing follows the steps shown in Figure 3.

4.2.1. Image conversion

We converted RGB images into grayscale images using the NTSC grayscale conversion. The equation is shown below:

Grayscale value = 0.3 × Red + 0.59 × Green + 0.11 × Blue
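As an illustrative sketch (the paper's implementation is in MATLAB; the function name here is ours), the NTSC weighting above can be written as:

```python
import numpy as np

def rgb_to_gray(rgb):
    """NTSC/luma weighting: 0.3*R + 0.59*G + 0.11*B per pixel."""
    weights = np.array([0.3, 0.59, 0.11])
    return rgb[..., :3] @ weights  # weighted sum over the channel axis

# A pure-red pixel (255, 0, 0) maps to 0.3 * 255 = 76.5.
```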

4.2.2. Filtering

The scanned images of signatures and digits are smoothed with a 3-by-3 median filter to suppress background noise.
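A minimal sketch of the 3-by-3 median filter in Python (MATLAB's `medfilt2` would serve the same role in the original implementation):

```python
import numpy as np

def median_filter_3x3(img):
    """Replace each pixel with the median of its 3x3 neighborhood;
    edges are handled by replicating the border pixels."""
    h, w = img.shape
    padded = np.pad(img, 1, mode='edge')
    # Stack the nine shifted views and take the median along the new axis.
    stack = np.stack([padded[r:r + h, c:c + w]
                      for r in range(3) for c in range(3)])
    return np.median(stack, axis=0)

noisy = np.zeros((5, 5))
noisy[2, 2] = 255.0   # a single speck of impulse noise
clean = median_filter_3x3(noisy)
```

The isolated speck disappears because eight of the nine neighborhood values are zero, which is exactly why median filtering is preferred over averaging for impulse noise.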

4.3. Extraction of Character and Signature

The signatures and characters are selected manually from the filtered image. This can also be automated by defining a rectangular region of interest in a predefined function.

4.4. Binarization

The signature and character images are converted from grayscale to binary, so each image contains only two pixel values: 0 (white) and 1 (black). The binary image is shown below in Figure 4.

Remove the Unnecessary Portion

After converting the image into a binary image, we removed the unnecessary white (0) border region and resized the image.
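The two steps above (Sections 4.4 and this cropping step) can be sketched as follows; the threshold value is our assumption, since the paper does not state one:

```python
import numpy as np

def binarize_and_crop(gray, threshold=128):
    """Map dark pixels to 1 (ink) and light pixels to 0, then crop
    to the bounding box of the ink pixels."""
    binary = (gray < threshold).astype(np.uint8)   # 1 = black, 0 = white
    rows = np.any(binary, axis=1)                  # rows containing ink
    cols = np.any(binary, axis=0)                  # columns containing ink
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return binary[r0:r1 + 1, c0:c1 + 1]
```

Resizing the cropped result to a fixed grid (e.g., the 20 × 15 images used in Section 5) would follow as a separate interpolation step.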

Figure 3. Image preprocessing steps block diagram.


Figure 4. Binary image (a) Signature, (b) Handwritten digit, (c) Handwritten letter.

4.5. Segmentation

The segmentation strategy lies in determining the best cut path so that each isolated character is recognized correctly. We mainly worked on a segmentation-based recognition technique, using an algorithm that computes the number of white pixels in each column (the vertical projection) of the binary image so that the text regions can be separated easily. The segmented digits and letters are shown in Figure 5.
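The projection-based cut can be sketched as follows. For simplicity this sketch counts ink (black) pixels per column and cuts where the count drops to zero, which is equivalent to the paper's white-pixel criterion for separating characters:

```python
import numpy as np

def segment_columns(binary):
    """Split a binary line image (1 = ink) into per-character
    (start, end) column ranges wherever the vertical projection is zero."""
    projection = binary.sum(axis=0)     # ink pixels in each column
    segments, start = [], None
    for x, count in enumerate(projection):
        if count > 0 and start is None:
            start = x                   # entering a character
        elif count == 0 and start is not None:
            segments.append((start, x)) # leaving a character
            start = None
    if start is not None:               # character touches the right edge
        segments.append((start, len(projection)))
    return segments
```

Note that this simple rule cannot split touching numerals, which is exactly the bottleneck discussed in Section 1 and addressed by [12].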

4.6. Feature Extraction

A rotation- and size-independent feature extraction method is used to extract the features of each segmented digit and signature, obtaining 44 features for each digitized signature (52 per character, as noted in Section 1).

Center of the image

The center of the image can be obtained using the following two equations:

Center_x = width/2 (1)

Center_y = height/2 (2)

Feature 1-38

These features capture how the black pixels are distributed across the image. First, the total number of pixels in the image is calculated:

Total_pixels = height × width (3)

The percentages of black pixels in the upper and lower halves of the image are defined as Feature 1 and Feature 2, respectively, i.e., the pixels located above and below the central point.

feature 1 = up_pixels/total_pixels (4)

feature 2 = down_pixels/total_pixels (5)

Similarly, Feature 3 and Feature 4 represent the percentages of black pixels located in the left and right halves of the image, in other words, the pixels to the left and right of the central point.

feature 3 = left_pixels/total_pixels (6)

feature 4 = right_pixels/total_pixels (7)
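Features 1-4 and Equations (1)-(7) can be sketched directly (an illustrative Python rendering of the MATLAB implementation):

```python
import numpy as np

def half_plane_features(binary):
    """Features 1-4: fraction of black pixels above/below and
    left/right of the image centre (Equations (1)-(7))."""
    h, w = binary.shape
    cy, cx = h // 2, w // 2              # centre, Equations (1)-(2)
    total = h * w                         # Equation (3)
    return (binary[:cy, :].sum() / total,  # feature 1: upper half
            binary[cy:, :].sum() / total,  # feature 2: lower half
            binary[:, :cx].sum() / total,  # feature 3: left half
            binary[:, cx:].sum() / total)  # feature 4: right half
```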


Figure 5. Segmented (a) digits; (b) letters.

Next, the image is partitioned into four sub-regions and the percentage of black pixels located in each region is calculated. Each region is then subdivided into four again, and the percentage of black pixels in each of those regions is calculated.

feature_{n} = sub_area_pixels_{n}/total_pixels (8)

where n = 5 to 24.

In order to extract Features 25 to 38, we need to consider the 16 (4 × 4) sub-regions, or blocks.

feature 25 = total number of black pixels from block (0,0) to (3,3)/Total_pixels (9)

feature 26 = total number of black pixels from block (1,0) to (4,3)/Total_pixels (10)

feature 27 = total number of black pixels from block (0,1) to (3,4)/Total_pixels (11)

feature 28 = total number of black pixels from block (1,1) to (4,4)/Total_pixels (12)

feature 29 = total black pixels of 2nd and 3rd rows of blocks/Total_pixels (13)

feature 30 = total black pixels of 2nd and 3rd columns of blocks/Total_pixels (14)

feature 31 = total black pixels of 1st row of blocks/Total_pixels (15)

feature 32 = total black pixels of 2nd row of blocks/Total_pixels (16)

feature 33 = total black pixels of 3rd row of blocks/Total_pixels (17)

feature 34 = total black pixels of 4th row of blocks/Total_pixels (18)

feature 35 = total black pixels of 1st column of blocks /Total_pixels (19)

feature 36 = total black pixels of 2nd column of blocks/Total_pixels (20)

feature 37 = total black pixels of 3rd column of blocks/Total_pixels (21)

feature 38 = total black pixels of 4th column of blocks/Total_pixels (22)
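The block features can be sketched as below. Note an assumption: the block indexing in Equations (9)-(12) runs to (4,4) although only a 4 × 4 grid is defined, so we read Features 25-28 as the four overlapping 3 × 3 windows of blocks over that grid; the row/column features 29-38 follow the text directly.

```python
import numpy as np

def block_features(binary):
    """Features 25-38: black-pixel fractions over a 4x4 grid of blocks,
    its four overlapping 3x3 windows, and the block rows/columns."""
    h, w = binary.shape
    total = h * w
    # 4x4 grid of per-block ink counts (assumes h and w divisible by 4)
    grid = binary.reshape(4, h // 4, 4, w // 4).sum(axis=(1, 3))
    feats = []
    for r0, c0 in [(0, 0), (1, 0), (0, 1), (1, 1)]:        # features 25-28
        feats.append(grid[r0:r0 + 3, c0:c0 + 3].sum() / total)
    feats.append(grid[1:3, :].sum() / total)               # feature 29
    feats.append(grid[:, 1:3].sum() / total)               # feature 30
    feats.extend(grid[r, :].sum() / total for r in range(4))   # 31-34
    feats.extend(grid[:, c].sum() / total for c in range(4))   # 35-38
    return feats
```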

Feature 39

Feature 39 is the average distance between the black pixels and the central point.

$\text{feature 39}=\frac{1}{\text{Total\_pixels}}\sum_{j}\sum_{i}\sqrt{(x-i)^{2}+(y-j)^{2}}$ (23)

where (i, j) are the coordinates of a black pixel and (x, y) are the coordinates of the central point.
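Feature 39 is then a one-liner with vectorized distances (again an illustrative Python sketch of the MATLAB computation):

```python
import numpy as np

def feature_39(binary):
    """Sum of Euclidean distances from every black pixel to the image
    centre, normalised by the total pixel count (Equation (23))."""
    h, w = binary.shape
    cy, cx = h / 2, w / 2                 # centre per Equations (1)-(2)
    ys, xs = np.nonzero(binary)           # coordinates of black pixels
    dists = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
    return dists.sum() / (h * w)
```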

Feature 40-46

These features are the seven moment invariants of the image, well known as the Hu moment invariants. We calculated the central moments of the segmented signature. For a 2-D function f(x, y) of an M × N binary image, the moment of order (p + q) is defined by:

${m}_{pq}={\displaystyle {\sum}_{x=1}^{M}{\displaystyle {\sum}_{y=1}^{N}{\left(x\right)}^{p}{\left(y\right)}^{q}f\left(x,y\right)}}$ (24)

where $p,q=0,1,2,3,\cdots $ .

Central moments are obtained by the following equations:

$\mu_{pq}=\sum_{x}\sum_{y}(x-\bar{x})^{p}(y-\bar{y})^{q}f(x,y)$ (25)

where $\bar{x}=\frac{m_{10}}{m_{00}}$ and $\bar{y}=\frac{m_{01}}{m_{00}}$ .

For scale normalization, the central moments are normalized as follows:

${\eta}_{pq}=\frac{{\mu}_{pq}}{{\mu}_{00}^{\gamma}}$ (26)

where $\gamma =\left[\frac{\left(p+q\right)}{2}\right]+1$ .

Seven values, computed from the normalized central moments through order three, are invariant to object scale, position, and orientation. In terms of the normalized central moments, the seven moments are given as:

${M}_{1}={\eta}_{20}+{\eta}_{02}$ (27)

${M}_{2}={\left({\eta}_{20}-{\eta}_{02}\right)}^{2}+4{\eta}_{11}^{2}$ (28)

${M}_{3}={\left({\eta}_{30}-3{\eta}_{12}\right)}^{2}+{\left(3{\eta}_{21}-{\eta}_{03}\right)}^{2}$ (29)

${M}_{4}={\left({\eta}_{30}+{\eta}_{12}\right)}^{2}+{\left({\eta}_{21}+{\eta}_{03}\right)}^{2}$ (30)

$\begin{array}{c}M_{5}=(\eta_{30}-3\eta_{12})(\eta_{30}+\eta_{12})\left[(\eta_{30}+\eta_{12})^{2}-3(\eta_{21}+\eta_{03})^{2}\right]\\ +(3\eta_{21}-\eta_{03})(\eta_{21}+\eta_{03})\left[3(\eta_{30}+\eta_{12})^{2}-(\eta_{21}+\eta_{03})^{2}\right]\end{array}$ (31)

${M}_{6}=\left({\eta}_{20}-{\eta}_{02}\right)\left[{\left({\eta}_{30}+{\eta}_{12}\right)}^{2}-{\left({\eta}_{21}+{\eta}_{03}\right)}^{2}\right]+4{\eta}_{11}\left({\eta}_{30}+{\eta}_{12}\right)\left({\eta}_{21}+{\eta}_{03}\right)$ (32)

$\begin{array}{c}M_{7}=(3\eta_{21}-\eta_{03})(\eta_{30}+\eta_{12})\left[(\eta_{30}+\eta_{12})^{2}-3(\eta_{21}+\eta_{03})^{2}\right]\\ -(\eta_{30}-3\eta_{12})(\eta_{21}+\eta_{03})\left[3(\eta_{30}+\eta_{12})^{2}-(\eta_{21}+\eta_{03})^{2}\right]\end{array}$ (33)
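Equations (24)-(28) can be sketched as follows; for brevity this illustrative Python version (the paper uses MATLAB) computes only the first two invariants, with the remaining five following the same pattern from Equations (29)-(33):

```python
import numpy as np

def normalized_central_moments(f):
    """Return a function eta(p, q) per Equations (24)-(26)."""
    ys, xs = np.mgrid[:f.shape[0], :f.shape[1]].astype(float)
    m00 = f.sum()
    xbar, ybar = (xs * f).sum() / m00, (ys * f).sum() / m00
    def eta(p, q):
        mu = (((xs - xbar) ** p) * ((ys - ybar) ** q) * f).sum()
        return mu / m00 ** ((p + q) / 2 + 1)   # gamma = (p+q)/2 + 1
    return eta

def hu_first_two(f):
    """Hu invariants M1 and M2 (Equations (27)-(28))."""
    eta = normalized_central_moments(f)
    m1 = eta(2, 0) + eta(0, 2)
    m2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return m1, m2
```

Because the invariants depend only on normalized central moments, rotating the image leaves them unchanged, which is what makes them useful rotation-independent features here.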

Feature 47-52

Features 47 to 52 are the means of the Major Axis Length, Minor Axis Length, Contrast, Homogeneity, Correlation, and Energy, respectively.

4.7. Neural Network Recognition

A neural network is composed of a number of nodes, called neurons, joined by links; every link carries a numeric weight. Neurons are the fundamental building blocks of a neural network.

4.7.1. Neural Network Design

A neural network is employed for signature recognition. For this purpose, a multilayer feed-forward neural network with supervised learning is both feasible and effective. The network implements the back-propagation learning algorithm, a well-established technique for training multilayer ANNs. The network design for the system is shown in Figure 6.

4.7.2. Settings of the Parameters

A neural network has several parameters, i.e., the learning-rate parameter (η), the weights (w), and the momentum (α). The learning rate determines how much the link weights and biases are adjusted, based on the direction and rate of change; its value must lie in the range 0 to 1. For a given neuron, the learning rate can be chosen inversely proportional to the square root of the number of synaptic connections to the neuron.

The weights of the network, which are adjusted by the back-propagation algorithm, must be initialized to non-zero values; the initial weights are chosen randomly in [−0.5, +0.5] or [−0.1, +0.1]. The weight update is performed using the following equation:

$\Delta w\left(n\right)=\eta \cdot \delta \left(n\right)\cdot y\left(n\right)+\alpha \Delta w\left(n-1\right)$ (34)

The momentum (α) ranges from 0 to 1, and a value of 0.9 has been found most suitable for most applications. Table 1 shows the initial values of the network parameters.
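The weight update of Equation (34) can be sketched for a single weight as follows (an illustrative Python fragment; in practice the same rule is applied element-wise to the weight matrices):

```python
def update_weight(w, delta, y, prev_dw, eta=0.1, alpha=0.9):
    """Equation (34): dw(n) = eta * delta(n) * y(n) + alpha * dw(n-1),
    where delta is the local error gradient and y the neuron input."""
    dw = eta * delta * y + alpha * prev_dw   # gradient step + momentum
    return w + dw, dw                         # updated weight, new dw(n)
```

The momentum term carries a fraction of the previous update forward, which smooths the trajectory and speeds convergence across flat regions of the error surface.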

Figure 6. Network design for the system.

Table 1. Initial values of the network parameters.

5. Performance Analysis and Empirical Result

Performance analysis was carried out after testing the results of this experiment. MATLAB was used to implement the proposed system; it supports algorithm development, mathematics and computation, data analysis, exploration and visualization, modeling, simulation and prototyping, scientific and engineering graphics, and application development. After extraction and segmentation of the scanned image, each digit is converted into a 20 × 15 binary image to be tested by the neural network. We split the samples into two groups: a training set containing 60% of the total genuine samples, with the remainder used for testing the system.

The method has been tested with the 10 handwritten digits and 52 alphabetic characters, with ten samples of every digit and letter. We collected signatures from 30 different persons, with ten samples per person. Signature recognition is 100% when the training and test sets are the same. The matching (accuracy) rate is above 98.1% for digit recognition and above 97.31% for English alphabetic characters. The accuracy rate for signatures is above 97.6%, with an error rate of only 2.4%. The overall recognition rates on the test data are shown below in Tables 2-4.

The final training performance is shown below in Figure 7.

Table 2. Overall recognition rate for handwritten digits.

Table 3. Overall recognition rate for handwritten alphabets.

Table 4. Overall recognition rate for handwritten signatures.

Figure 7. Performance of the system.

6. Conclusion

The proposed system provides a method for recognizing handwritten characters (both numeric and alphabetic) and signatures. A neural network was designed and tested with 10 samples of each character type and 10 samples per signature; the samples are of different kinds, and each yields a percentage matching/acceptance rate, which depends on appropriate training samples. On average, the success rate of the Numerical Character Recognition and Verification System (NCRVS) is 98.1%; the success rate of the Alphabetic Character Recognition and Verification System (ACRVS) is 97.31%; and that of the Signature Recognition and Verification System (SRVS) is 97.6%, which meets the expectations of the research. This work mainly aims at reducing fraud in commercial transactions.

Conflicts of Interest

The authors declare no conflicts of interest.

Cite this paper

*Journal of Information Security*, **9**, 209-224. doi: 10.4236/jis.2018.93015.

[1] Zhang, Y., Liang, S., Nie, S., Liu, W. and Peng, S. (2018) Robust Offline Handwritten Character Recognition through Exploring Writer-Independent Features under the Guidance of Printed Data. Pattern Recognition Letters, 106, 20-26. https://doi.org/10.1016/j.patrec.2018.02.006

[2] Patil, V. and Shimpi, S. (2011) Handwritten English Character Recognition Using Neural Network. Elixir International Journal: Computer Science and Engineering, 41, 5587-5591.

[3] Yeung, D.-Y., et al. (2004) SVC2004: First International Signature Verification Competition. Biometric Authentication, Springer, 16-22.

[4] Drouhard, J.-P., Sabourin, R. and Godbout, M. (1994) Evaluation of a Training Method and of Various Rejection Criteria for a Neural Network Classifier Used for Off-Line Signature Verification. IEEE International Conference on Neural Networks, IEEE World Congress on Computational Intelligence, 7, 4294-4299. https://doi.org/10.1109/ICNN.1994.374957

[5] Leclerc, F. and Plamondon, R. (1994) Automatic Signature Verification: The State of the Art, 1989-1993. International Journal of Pattern Recognition and Artificial Intelligence, 8, 643-660. https://doi.org/10.1142/S0218001494000346

[6] Ammar, M., Yoshida, Y. and Fukumura, T. (1988) Off-Line Preprocessing and Verification of Signatures. International Journal of Pattern Recognition and Artificial Intelligence, 2, 589-602. https://doi.org/10.1142/S0218001488000376

[7] Jain, A.K., Griess, F.D. and Connell, S.D. (2002) On-Line Signature Verification. Pattern Recognition, 35, 2963-2972. https://doi.org/10.1016/S0031-3203(01)00240-0

[8] Impedovo, D. and Pirlo, G. (2008) Automatic Signature Verification: The State of the Art. IEEE Transactions on Systems, Man and Cybernetics, Part C (Applications and Reviews), 38, 609-635.

[9] Bose, N.K. and Liang, P. (1996) Neural Network Fundamentals with Graphs, Algorithms and Applications. McGraw-Hill Series in Electrical and Computer Engineering.

[10] Miah, M.B.A., Yousuf, M.A., Mia, M.S. and Miya, M.P. (2015) Handwritten Courtesy Amount and Signature Recognition on Bank Cheque Using Neural Network. International Journal of Computer Applications, 118, No. 5.

[11] Palacios, R., Gupta, A. and Wang, P.S. (2003) Feedback-Based Architecture for Reading Courtesy Amounts on Checks. Journal of Electronic Imaging, 12, 194-203. https://doi.org/10.1117/1.1526105

[12] Pal, U., Belaïd, A. and Choisy, C. (2003) Touching Numeral Segmentation Using Water Reservoir Concept. Pattern Recognition Letters, 24, 261-272. https://doi.org/10.1016/S0167-8655(02)00240-4

[13] Dewangan, S.K. (2013) Real Time Recognition of Handwritten Devnagari Signatures without Segmentation Using Artificial Neural Network. International Journal of Image, Graphics and Signal Processing, 5, 30. https://doi.org/10.5815/ijigsp.2013.04.04

[14] Dey, S.A. (1999) Adding Feedback to Improve Segmentation and Recognition of Handwritten Numerals. PhD Thesis, Massachusetts Institute of Technology.

[15] Guillevic, D. and Suen, C.Y. (1998) Recognition of Legal Amounts on Bank Cheques. Pattern Analysis and Applications, 1, 28-41. https://doi.org/10.1007/BF01238024

[16] Kaufmann, G. and Bunke, H. (1998) A System for the Automated Reading of Check Amounts: Some Key Ideas. International Workshop on Document Analysis Systems, Nagano, 4-6 November 1998, 188-200.

[17] Molla, M.K.I. and Talukder, K.H. (2002) Bangla Number Extraction and Recognition from Document Image. 5th ICCIT, Dhaka, 27-28 December 2002, 200-206.

[18] Shah, M.S., Haque, S.A., Islam, M.R., Ali, M.A. and Shabbir, M. (2010) Automatic Recognition of Handwritten Bangla Courtesy Amount on Bank Checks. International Journal of Computer Science and Network Solutions, 10, 154-163.

[19] Lethelier, E., Leroux, M. and Gilloux, M. (1995) An Automatic Reading System for Handwritten Numeral Amounts on French Checks. Proceedings of the 3rd International Conference on Document Analysis and Recognition, Montreal, 14-16 August 1995, Vol. 1, 92-97. https://doi.org/10.1109/ICDAR.1995.598951

[20] Mashiyat, A.S., Mehadi, A.S. and Talukder, K.H. (2004) Bangla Off-Line Handwritten Character Recognition Using Superimposed Matrices. 7th International Conference on Computer and Information Technology, Dhaka, 26-28 December 2004, 610-614.

[21] Brocklehurst, E.R. (1985) Computer Methods of Signature Verification. Journal of the Forensic Science Society, 25, 445-457. https://doi.org/10.1016/S0015-7368(85)72433-4

[22] Qi, Y. and Hunt, B.R. (1994) Signature Verification Using Global and Grid Features. Pattern Recognition, 27, 1621-1629. https://doi.org/10.1016/0031-3203(94)90081-7

[23] Miah, M.B.A., Haque, S.A., Rashed Mazumder, M. and Rahman, Z. (2011) A New Approach for Recognition of Holistic Bangla Word Using Neural Network. International Journal of Data Warehousing and Mining, 1, 139-141.

Copyright © 2018 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.