Template matching is a fundamental problem in pattern recognition with wide applications, especially in industrial inspection. In this paper, we propose a 1-D template matching algorithm as an alternative to 2-D full-search block-matching algorithms. Our approach consists of three steps. In the first step, the images are converted from 2-D into 1-D by summing up the intensity values of the image in two directions, horizontal and vertical. In the second step, template matching is performed among the 1-D vectors using the sum of squared differences as the similarity function. Finally, the decision is taken based on the value of the similarity function. Transforming the template image and the sub-images of the source image from 2-D grey-level information into 1-D information vectors reduces the dimensionality of the data and accelerates the computations. Experimental results show that the proposed approach is computationally faster and performs better than three basic template matching methods. Moreover, our approach robustly detects the target object under illumination changes in the template and when Gaussian noise is added to the source image.

One of the most important issues in image analysis is automatically extracting useful information from an image. This information must be explicit so that it can be used in subsequent decision-making processes. The more commonly used image analysis techniques include template matching and statistical pattern recognition. We can classify the types of analysis we wish to perform according to function. There are two main things we may wish to know about the scene in an image. First, we may wish to ascertain whether or not the visual appearance of objects is as it should be, i.e. we may wish to inspect the objects. The implicit assumption here is, of course, that we know what objects are in the image in the first place and approximately where they are. The second function of image analysis is location. If we do not know where the objects are, we may wish to locate them. The location of an object requires the specification of both horizontal and vertical coordinates. The coordinates may be specified in terms of the image frame of reference, where distance is measured in pixels.

Many applications of computer vision simply need to know whether an image contains some previously defined object, or whether a pre-defined sub-image is contained within a source image. The sub-image is called a template and should be an ideal representation of the pattern or object being sought in the image. The template matching technique involves translating the template to every possible position in the image and evaluating a measure of the match between the template and the image at that position. If the similarity measure is large enough, the object can be assumed to be present. Various difference measures, with different mathematical and computational properties, have been used to find the location of the template in the source image. The most popular similarity measures are the sum of absolute differences (SAD), the sum of squared differences (SSD), and the normalized cross correlation (NCC). Because SAD and SSD are computationally fast, and algorithms are available which make the template search even faster, many applications of grey-level image matching use SAD or SSD to determine the best match. However, these measures are sensitive to outliers and are not robust to variations in the template, such as those that occur at occluding boundaries in the image. The NCC measure, although computationally slow, is more accurate and more robust than SAD and SSD under uniform illumination changes, so it has been widely used in object recognition and industrial inspection such as in [

Many techniques have been developed to speed up the template matching process. Chen and Hung [

Techniques in the second class accelerate the calculation of the matching error at each search position. Instead of calculating the full matching error (SAD), these techniques calculate a partial matching error which needs less computation than SAD and whose value is less than or equal to SAD. One simple technique in this class is to subsample the pixels in the matching blocks. For example, a partial sum of absolute differences can be calculated using a quarter of the pixels, regularly subsampled in each matching block [
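As a concrete illustration of this idea (a minimal sketch, not the referenced authors' code; the function names are ours), a partial SAD computed over every other row and column uses roughly a quarter of the pixels, and since every omitted term is non-negative it can never exceed the full SAD:

```python
def full_sad(block, template):
    # Sum of absolute differences over all pixels of the block.
    return sum(abs(block[i][j] - template[i][j])
               for i in range(len(template))
               for j in range(len(template[0])))

def partial_sad(block, template):
    # Use only every other row and column: roughly a quarter of the pixels.
    # Each omitted |difference| term is >= 0, so partial_sad <= full_sad.
    return sum(abs(block[i][j] - template[i][j])
               for i in range(0, len(template), 2)
               for j in range(0, len(template[0]), 2))
```

A candidate block can thus be rejected as soon as its partial SAD already exceeds the best full SAD found so far.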

Reducing the image data by converting the image from 2-D into 1-D is a new strategy in template matching, introduced by [

In this section we introduce the three most important template matching algorithms. The first is the normalized cross correlation technique, which is computationally expensive but very robust against noise under different illumination conditions, so it has been widely used in object recognition and industrial inspection. The second is the sum of absolute differences algorithm, which is computationally cheaper than normalized cross correlation but not robust to intensity and contrast variations; it is therefore used in applications such as feature tracking and block motion estimation in video compression. The last is the coarse-to-fine (CTF) technique, which reduces the computational cost by using block averaging to decrease the spatial resolution of the template and the source image. It applies the low-resolution ("coarse") template to the low-resolution source, and uses the full-resolution ("fine") template only when the coarse template's degree of mismatch with the source is below a given threshold.

NCC has been commonly used as a metric in pattern matching to evaluate the degree of similarity between the template and blocks in the source image. The main advantage of the NCC over other techniques is that it is less sensitive to linear changes in the amplitude of illumination in the compared template and block. Furthermore, the NCC value is confined to the range between −1 and 1, so setting a detection threshold is much easier than with other techniques. However, the NCC does not have a simple frequency-domain expression, so it cannot be computed directly using the efficient fast Fourier transform in the spectral domain, and its computation time increases dramatically as the window size of the template gets larger [

In pattern matching applications, NCC works as follows: one finds an instance of a small template in a large source image by sliding the template window on a pixel-by-pixel basis and computing the normalized correlation between the template and the underlying block. The maximum values, or peaks, of the computed correlation surface indicate the matches between the template and sub-images in the source image.
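The sliding-window procedure can be sketched as follows (illustrative Python, not the paper's Matlab implementation; `ncc_surface` is our own name):

```python
import math

def ncc_surface(source, template):
    """Slide the template over the source; return (best NCC value, position)."""
    M, N = len(source), len(source[0])
    m, n = len(template), len(template[0])
    t_mean = sum(map(sum, template)) / (m * n)
    best_val, best_pos = -2.0, None
    for u in range(M - m + 1):
        for v in range(N - n + 1):
            # Mean of the source block currently under the template window.
            b_mean = sum(source[u + i][v + j]
                         for i in range(m) for j in range(n)) / (m * n)
            num = den_b = den_t = 0.0
            for i in range(m):
                for j in range(n):
                    db = source[u + i][v + j] - b_mean
                    dt = template[i][j] - t_mean
                    num += db * dt
                    den_b += db * db
                    den_t += dt * dt
            # Normalized correlation, confined to [-1, 1].
            val = num / math.sqrt(den_b * den_t) if den_b * den_t > 0 else 0.0
            if val > best_val:
                best_val, best_pos = val, (u, v)
    return best_val, best_pos
```

An exact copy of the template embedded in the source yields a peak value of 1; the peak's position is the detected match.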

The peak value is 0.999, occurring at position (200, 175) in the correlation surface; this position in the source image is the correct match for template T(a).

The values of NCC used for finding matches of the template $T$ in the source image $S$ are given by

$$\mathrm{NCC}(u,v)=\frac{\sum_{x,y}\left[S(x+u,\,y+v)-\bar{S}_{u,v}\right]\left[T(x,y)-\bar{T}\right]}{\sqrt{\sum_{x,y}\left[S(x+u,\,y+v)-\bar{S}_{u,v}\right]^{2}\,\sum_{x,y}\left[T(x,y)-\bar{T}\right]^{2}}}$$

where $\bar{S}_{u,v}$ is the mean of the source block under the template at position $(u,v)$ and $\bar{T}$ is the mean of the template. Direct computation of $\mathrm{NCC}(u,v)$ at every search position is computationally expensive, which motivates the faster techniques described next.

The SAD is another commonly used similarity measure in pattern matching. According to [

Assume we have a template image $T$ of size $m \times n$ and a source image $S$ of size $M \times N$. The SAD between the template and the block of $S$ at position $(u,v)$ is

$$\mathrm{SAD}(u,v)=\sum_{x=0}^{m-1}\sum_{y=0}^{n-1}\left|S(x+u,\,y+v)-T(x,y)\right|$$

where $0 \le u \le M-m$ and $0 \le v \le N-n$. The position with the minimum value of $\mathrm{SAD}(u,v)$ is taken as the best match for the template in the source image. If one of the values of $\mathrm{SAD}(u,v)$ equals zero, the corresponding block is identical to the template. The computation of $\mathrm{SAD}(u,v)$ over all search positions requires on the order of $MNmn$ operations.
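A minimal full-search SAD sketch (our own illustration, not the paper's code):

```python
def sad_search(source, template):
    """Full search: return (minimum SAD, position of the best match)."""
    M, N = len(source), len(source[0])
    m, n = len(template), len(template[0])
    best = (float("inf"), None)
    for u in range(M - m + 1):
        for v in range(N - n + 1):
            s = 0
            for i in range(m):
                for j in range(n):
                    s += abs(source[u + i][v + j] - template[i][j])
            if s < best[0]:
                best = (s, (u, v))  # smaller SAD = better match
    return best
```

An exact match yields a SAD of zero at the template's true position.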

The coarse-to-fine strategy, proposed by Rosenfeld and Vander Brug [

Instead of performing a complete search at the next resolution level, one needs to search only a close vicinity of the area found by the previous search. This sequence is iterated until the original source image itself is searched.
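A two-level sketch of this strategy, assuming factor-2 block averaging, even image dimensions, and a ±1-pixel refinement window (the threshold test on the coarse mismatch is omitted for brevity; all names are ours):

```python
def block_average(img):
    # Halve each dimension by averaging 2x2 blocks (assumes even dimensions).
    return [[(img[2 * i][2 * j] + img[2 * i][2 * j + 1] +
              img[2 * i + 1][2 * j] + img[2 * i + 1][2 * j + 1]) / 4.0
             for j in range(len(img[0]) // 2)]
            for i in range(len(img) // 2)]

def sad_at(source, template, u, v):
    return sum(abs(source[u + i][v + j] - template[i][j])
               for i in range(len(template))
               for j in range(len(template[0])))

def ctf_search(source, template):
    cs, ct = block_average(source), block_average(template)
    # Full search at the coarse (half-resolution) level.
    coarse = min((sad_at(cs, ct, u, v), (u, v))
                 for u in range(len(cs) - len(ct) + 1)
                 for v in range(len(cs[0]) - len(ct[0]) + 1))[1]
    # Fine search only in the vicinity of the coarse hit.
    cu, cv = 2 * coarse[0], 2 * coarse[1]
    umax = len(source) - len(template)
    vmax = len(source[0]) - len(template[0])
    cands = [(sad_at(source, template, u, v), (u, v))
             for u in range(max(0, cu - 1), min(umax, cu + 1) + 1)
             for v in range(max(0, cv - 1), min(vmax, cv + 1) + 1)]
    return min(cands)[1]
```

Because the coarse search runs on images with a quarter of the pixels, and the fine search only visits a handful of candidate positions, the total cost is far below that of a full-resolution full search.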

The image resolution levels are obtained by reducing the dimensions of the image by a factor

of corresponding

To compute the computational cost of the coarse-to-fine method, assume that the size of the source image is

in [

template blocks in the source image, the computation overhead for each template block is

Fouda [

source image which has the same size as the template from two-dimensional into a one-dimensional information vector. The new information vector consists of two parts. The first part is the summation of all intensity values of each column in the 2-D image (see Equation (5)). The second part is the summation of all intensity values of each row in the 2-D image (see Equation (6)). In this way we scan the 2-D image in two directions (vertical and horizontal), which makes our technique more robust against noise. The new information vector (see Equation (7)) is used in the matching process instead of the 2-D image. This allows the search to be performed on far fewer data, while still taking all pixel intensity values into account. Secondly, the sum of squared differences is used to measure the likeness between the 1-D template and all possible 1-D blocks in the source image. Other measures, such as the sum of absolute differences or the Euclidean distance between the template and all possible blocks in the source, could be used instead. Finally, the decision is taken based on the likeness values: the block in the source with the minimum distance value is the best match for the template.
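The 2-D to 1-D conversion of Equations (5)-(7) amounts to concatenating the column sums and the row sums of the image; a minimal sketch (the function name is ours):

```python
def to_1d_vector(img):
    rows, cols = len(img), len(img[0])
    col_sums = [sum(img[i][j] for i in range(rows)) for j in range(cols)]  # vertical scan, Eq. (5)
    row_sums = [sum(img[i]) for i in range(rows)]                          # horizontal scan, Eq. (6)
    return col_sums + row_sums                                             # information vector, Eq. (7)
```

For the 2 × 3 image [[1, 2, 3], [4, 5, 6]] this yields the column sums [5, 7, 9] followed by the row sums [6, 15], i.e. a vector of length m + n = 5 instead of m·n = 6 values.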

To formalize the problem, suppose that we have a source image S of size

First, we scan the template image in the vertical direction by adding up the intensity values of rows for each column. Then the first part

Next, we scan the template image in the horizontal direction by adding up the intensity values of columns for each row. Then the second part

Now, the template image

where

Secondly, for each pixel position in the source image, the corresponding block is converted into the two parts $NB_{1}$ and $NB_{2}$ by the following formulas:

and

Then a 1-D information vector is constructed, as for the template image, by the following formula:

$$NB=\left[NB_{1}\;\,NB_{2}\right] \qquad (10)$$

where

Thirdly, the likeness between the template image and each corresponding block in the source is measured by the sum of squared differences between NT and NB. All these distances are computed and stored in a new array

The position
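Putting the pieces together, the matching and decision steps can be sketched as follows (illustrative code; `to_1d_vector` concatenates the column and row sums as in Equations (5)-(7), and all names are ours):

```python
def to_1d_vector(img):
    # Equations (5)-(7): column sums followed by row sums.
    rows, cols = len(img), len(img[0])
    col_sums = [sum(img[i][j] for i in range(rows)) for j in range(cols)]
    row_sums = [sum(img[i]) for i in range(rows)]
    return col_sums + row_sums

def match_1d(source, template):
    # Full search over all m x n blocks of the source, but comparing
    # (m + n)-element 1-D vectors instead of m*n pixels; the block with
    # the minimum sum of squared differences is declared the match.
    M, N = len(source), len(source[0])
    m, n = len(template), len(template[0])
    nt = to_1d_vector(template)
    best_d, best_pos = float("inf"), None
    for u in range(M - m + 1):
        for v in range(N - n + 1):
            block = [row[v:v + n] for row in source[u:u + m]]
            d = sum((a - b) ** 2 for a, b in zip(nt, to_1d_vector(block)))
            if d < best_d:
                best_d, best_pos = d, (u, v)
    return best_pos
```

The per-position comparison now touches m + n values rather than m·n, which is the source of the speedup reported in the experiments.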

Now we discuss the computational cost of our proposed method. First, the cost of converting the template and all blocks in the source into 1-D can be ignored: for the template this need only be done once, and when we calculate the computational cost for every block position in the source, the contribution due to summing the blocks becomes negligible compared with computing the matrix

for some function

of matrix

take the maximum complexity for the three terms

not exceed

In this section, we show the efficiency improvement of the proposed algorithm over one-dimensional-based template matching. In order to make the template matching more robust, the proposed method adds a new feature to the vectors used in the one-dimensional algorithm. To compare efficiency, we also implemented the normalized cross correlation (NCC), sum of absolute differences (SAD), coarse-to-fine (CTF), and one-dimensional (1-D) methods. These algorithms were implemented in Matlab 7.0 on a laptop with an Intel® Core™2 Duo CPU T7500 @ 2.20 GHz and 1.99 GB RAM. Two types of images are used for testing: RGB images and grey-scale images. The greens image of size 300 × 500 and its noisy version, used as the source image, represent the RGB case (see

Let us start with the RGB case, through the greens image

The results in

Secondly, the proposed algorithm was tested in the grey-scale case, through the two-man image

Seconds | T(a) | T(b) | T(c) | T(d) | T(e) | T(f) |
---|---|---|---|---|---|---|
NCC | 70.25 | 71.03 | 69.98 | 69.73 | 69.76 | 74.03 |
SAD | 45.61 | 46.06 | 45.92 | 45.65 | 45.98 | 45.7 |
CTF | 23.59 | 49.53 | 34.33 | 23.59 | 49.89 | 33.91 |
1-D | 8.17 | 6.41 | 8.66 | 6.65^{*} | 8.28^{*} | 6.61^{*} |
M1D | 12.36 | 11.86 | 11.72 | 12.33 | 10.22 | 11.56 |

Seconds | T(a) | T(b) | T(c) | T(d) | T(e) | T(f) |
---|---|---|---|---|---|---|
NCC | 75.49 | 74.84 | 79.67 | 74.86 | 79.73 | 78.08 |
SAD | 47.42 | 48.37 | 47.56 | 45.73 | 45.93 | 45.78 |
CTF | 23.75 | 51.15 | 35.36 | 23.64 | 51.31 | 33.94 |
1-D | 7.98 | 8.05 | 8.56 | 8.22^{*} | 6.59^{*} | 6.61^{*} |
M1D | 10.25 | 12.78 | 12.87 | 12.55 | 11.87 | 12.76 |

Seconds | T(a) | T(b) | T(c) | T(d) | T(e) | T(f) |
---|---|---|---|---|---|---|
NCC | 21.47 | 21.59 | 23.33 | 20.42 | 21.88 | 24.47 |
SAD | 7.18 | 7.28 | 7.31 | 7.2 | 7.2 | 7.19 |
CTF | 6.14 | 5.17 | 3.99 | 5.6 | 5.06 | 3.95 |
1-D | 0.87 | 0.87 | 0.89 | 0.89^{*} | 0.91^{*} | 0.89^{*} |
M1D | 1.64 | 1.66 | 1.61 | 1.59 | 1.62 | 1.61 |

Seconds | T(a) | T(b) | T(c) | T(d) | T(e) | T(f) |
---|---|---|---|---|---|---|
NCC | 23.11 | 23.64 | 22.76 | 23.47 | 22.53 | 24.76 |
SAD | 7.5 | 7.2 | 7.15 | 7.44 | 7.42 | 7.23 |
CTF | 5.76 | 5.11 | 3.92 | 5.54 | 4.98 | 4.06 |
1-D | 0.89 | 0.91 | 0.92 | 0.92^{*} | 0.92^{*} | 0.88^{*} |
M1D | 1.58 | 1.56 | 1.56 | 1.59 | 1.59 | 1.58 |

T(d), T(e), and T(f). The proposed algorithm M1D gives an improvement of 92.21% and gives a correct match under the noise conditions. In addition, the proposed algorithm gives a correct match under illumination changes, and it outperforms the other basic algorithms in running time.

The execution time in

In this paper, we proposed a one-dimensional full-search algorithm for template matching. It depends on reducing the image data from

It has been shown theoretically and experimentally that the computational cost of the proposed algorithm is