Research on Stitching Algorithm Based on Tree Branch Image

Abstract

Branch identification technology is key to achieving automated pruning of fruit tree branches, and one of its technical bottlenecks lies in the stitching of branch images. To this end, we propose a branch image stitching algorithm. The algorithm determines the detected feature points with the grayscale centroid method, performs the geometric transformation of the images using the coordinate transformation matrix H of corresponding image points, and realises feature matching through sample comparison and classification. The experimental results show that the algorithm obtains a high proportion of correct matching points in a short time.

Share and Cite:

Huang, B. and Zou, S. (2023) Research on Stitching Algorithm Based on Tree Branch Image. World Journal of Engineering and Technology, 11, 381-388. doi: 10.4236/wjet.2023.112027.

1. Introduction

Image stitching is the technique of combining several images with overlapping regions (which may be acquired at different times, from different viewpoints or with different sensors) into a seamless panoramic or high-resolution image. Lowe proposed the SIFT (Scale-Invariant Feature Transform) algorithm in the late twentieth century [1] . In the twenty-first century, image stitching techniques developed rapidly. Lensch, H. P. A. et al. obtained surface textures for the corresponding 3D models from multiple photographic images by comparing a stitched texture with an already registered image, thus achieving global multi-view optimisation [2] . Kwatra, V. et al. proposed a new algorithm for image and video texture synthesis; the size of its patches is not pre-selected, but a graph-cut technique determines the optimal patch region for any given offset between the input and output textures [3] . Bria, A. et al. proposed a free, fully automated 3D stitching tool with micron-level resolution [4] . Zaragoza, J. et al. proposed a new estimation technique called Moving Direct Linear Transformation (Moving DLT), capable of adjusting or fine-tuning the warp to accommodate deviations of the input data from the idealised conditions, achieving accurate alignment of multiple images to create large panoramas [5] . Zhang, F. et al. proposed a local stitching method for handling parallax, based on the observation that the input images need not be perfectly aligned over the entire overlapping area to be stitched [6] . Zhang, J. et al. proposed an image stitching algorithm based on histogram matching and the scale-invariant feature transform (SIFT) [7] . Chen, J. et al. proposed an image stitching algorithm based on an optimal seam and a half-projective warp, which can effectively preserve the original information of the image and obtain an ideal stitching effect [8] . Lin, M. Y. et al. designed a multi-plane alignment algorithm guided by disparity maps for images with large parallax, proposing the concept of average tolerable parallax to help distinguish between a background plane and multiple foreground objects in an image. Liu, Y. et al. proposed an improved feature point pair purification algorithm based on SIFT, using a K-nearest-neighbour feature point matching algorithm for rough matching; robustness tests with the RANSAC (Random Sample Consensus) method eliminate mismatched point pairs and reduce the mismatch rate [9] . Nie, L. et al. proposed an image stitching learning framework consisting of a multi-scale deep homography module and an edge-preserving deformation module [10] . Zhang, G. X. et al. proposed a stitching algorithm for UAV aerial images based on semantic segmentation and ORB to improve the stitching quality of low-altitude UAV aerial images [11] . Zhang, Z. Y. et al. used a Gaussian pyramid to improve a plain ORB algorithm, achieving a stitching speed about 10 times that of SIFT [12] . Chai, X. C. et al. used an upscaling-downscaling image sampling procedure to obtain pixel-level optimal seamlines for very large remote sensing images [13] . Kang, Y. et al. proposed a novel multi-view X-ray digital imaging stitching algorithm (MVS) based on a CdZnTe photon-counting linear-array detector to solve the problem of sector-beam X-ray stitching distortion [14] . Most existing research is applied to industry, and studies in agriculture are few; in particular, few studies on stitching technology for fruit tree branches and trunks have been reported. Because branch and trunk images have complex backgrounds, existing methods cannot accurately complete branch and trunk image stitching. Therefore, this paper proposes a fruit tree branch image recognition and stitching method based on branch and trunk features.

2. Image Stitching Methods

Image stitching refers to the integration of images from different views of the same scene using computer software [15] . We divide image stitching into four steps:

1) Pre-processing of the image to be stitched;

2) Extraction of features of the image to be stitched;

3) Performing feature matching;

4) Establishing a mapping relationship and using image fusion techniques to complete the seamless stitching of the image.
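The four steps above can be illustrated with a deliberately simplified sketch (not the authors' implementation): two grayscale strips that differ only by a horizontal shift are pre-processed, aligned by an exhaustive overlap search that stands in for feature matching, and fused with a linear cross-fade. The function name and the translation-only assumption are hypothetical.

```python
import numpy as np

def stitch_translation(left, right, overlap_search=60):
    """Toy illustration of the four stitching steps for two grayscale
    images that differ only by a horizontal shift (an assumed simplification;
    the paper's method uses feature points and a full homography)."""
    # 1) Pre-processing: normalise intensities to [0, 1].
    L = left.astype(np.float64) / 255.0
    R = right.astype(np.float64) / 255.0
    # 2)-3) Feature extraction and matching are collapsed here into a
    #       correlation search for the overlap width with the lowest error.
    best_w, best_err = 1, np.inf
    for w in range(1, min(overlap_search, L.shape[1], R.shape[1])):
        err = np.mean((L[:, -w:] - R[:, :w]) ** 2)
        if err < best_err:
            best_err, best_w = err, w
    # 4) Mapping + fusion: paste with a linear cross-fade over the overlap.
    w = best_w
    out = np.zeros((L.shape[0], L.shape[1] + R.shape[1] - w))
    out[:, :L.shape[1]] = L
    alpha = np.linspace(1.0, 0.0, w)          # weight for the left image
    out[:, L.shape[1] - w:L.shape[1]] = alpha * L[:, -w:] + (1 - alpha) * R[:, :w]
    out[:, L.shape[1]:] = R[:, w:]
    return out, w
```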

2.1. Determination of Detection Feature Points

This study uses a FAST algorithm based on the grayscale centroid to detect feature points, and then selects the top N feature points using the Harris corner measure. The principle is to pick a candidate point P from the image, with IP denoting the grey value of P, and to compare IP with the grey values of the sixteen pixels lying on the edge of a circular neighbourhood of radius three around P, as shown in Figure 1. If there are n contiguous pixels on the circle whose grey values are all greater than or all less than IP, then P is identified as a feature point.
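The segment test just described can be sketched as follows. This is a minimal illustration, not a full FAST implementation: the function name, the threshold `t`, and the contiguity count `n = 12` are assumptions for the sketch.

```python
import numpy as np

# Offsets (dx, dy) of the 16 pixels on the Bresenham circle of radius 3.
CIRCLE = [( 0,  3), ( 1,  3), ( 2,  2), ( 3,  1), ( 3,  0), ( 3, -1),
          ( 2, -2), ( 1, -3), ( 0, -3), (-1, -3), (-2, -2), (-3, -1),
          (-3,  0), (-3,  1), (-2,  2), (-1,  3)]

def is_fast_corner(img, x, y, t=20, n=12):
    """Simplified segment test: P(x, y) is a corner if at least n contiguous
    circle pixels are all brighter than IP + t or all darker than IP - t."""
    ip = int(img[y, x])
    ring = np.array([int(img[y + dy, x + dx]) for dx, dy in CIRCLE])
    brighter = ring > ip + t
    darker = ring < ip - t
    for mask in (brighter, darker):
        doubled = np.concatenate([mask, mask])   # handle wrap-around runs
        run = 0
        for v in doubled:
            run = run + 1 if v else 0
            if run >= n:
                return True
    return False
```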

Suppose P(x, y) is a feature point in the image. The moments of the neighbourhood pixels of P(x, y) are given by Equation (1):

m_{ij} = \sum_{x=-r}^{r} \sum_{y=-r}^{r} x^{i} y^{j} I(x, y)   (1)

where I(x, y) is the grey value at the point P(x, y), i, j ∈ {0, 1}, and r = 3 is the radius of the neighbourhood.

The centre of mass of the image is shown in Equation (2):

C = \left( \frac{m_{10}}{m_{00}}, \frac{m_{01}}{m_{00}} \right)   (2)

The orientation of the FAST feature point is given by Equation (3):

\theta = \arctan\left( \frac{m_{01}}{m_{10}} \right)   (3)
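Equations (1)-(3) can be checked with a small sketch that computes the intensity centroid and orientation of a square grey-level patch. The function name is hypothetical; `atan2` is used as the numerically robust form of the arctangent in Equation (3), since it also handles m10 = 0.

```python
import math
import numpy as np

def patch_orientation(patch):
    """Orientation of a feature point from the intensity centroid of its
    neighbourhood, per Equations (1)-(3). `patch` is the (2r+1)x(2r+1)
    grey-level window centred on the point P."""
    r = patch.shape[0] // 2
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]   # coordinates relative to P
    m00 = patch.sum()                       # zeroth moment, Equation (1)
    m10 = (xs * patch).sum()                # first moment in x
    m01 = (ys * patch).sum()                # first moment in y
    cx, cy = m10 / m00, m01 / m00           # centroid, Equation (2)
    theta = math.atan2(m01, m10)            # orientation, Equation (3)
    return (cx, cy), theta
```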

2.2. Geometric Transformation of Images

When images are acquired several times, the geometry of the captured images varies considerably due to differences in the shooting angle. This requires a geometric transformation of the images, which is achieved by solving for the coordinate transformation matrix H relating corresponding points of the images, as shown in Equation (4).

H = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}   (4)

Figure 1. Schematic diagram of extracting feature points.

Let I(x, y) be the original image, I′(x′, y′) be the transformed image, and x, y be the horizontal and vertical coordinates of the image; then

I'(x', y') = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} I(x, y)   (5)

We use a projective transformation, i.e. the projection of an image from one coordinate plane to another by means of the transformation matrix. The matrix is usually normalised so that h33 in Equation (5) equals 1, as shown in Equation (6).

H = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & 1 \end{bmatrix}   (6)

As can be seen from Equation (6), 8 parameters need to be determined; since each pair of matching points provides two equations, only 4 pairs of matching points need to be involved in the solution.
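The 8-parameter solve from 4 point pairs can be sketched directly: each correspondence (x, y) → (u, v) contributes two linear equations in h11 … h32, giving an 8 × 8 system. The function names are hypothetical; this is the standard direct linear solution under the assumption h33 = 1, not the authors' code.

```python
import numpy as np

def homography_from_4_points(src, dst):
    """Solve the 8 unknowns of H (with h33 = 1) from exactly 4 point
    correspondences; src and dst are sequences of four (x, y) pairs."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h11 x + h12 y + h13) / (h31 x + h32 y + 1), cleared of the
        # denominator; likewise for v.
        A.append([x, y, 1, 0, 0, 0, -x * u, -y * u]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -x * v, -y * v]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_h(H, pt):
    """Map a point through H using homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```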

2.3. Feature Matching Algorithm

Image matching is a particularly critical step in image stitching: the correct matching rate determines whether the final stitching result is good or bad. We use the KNN (K-nearest neighbour) algorithm. Its basic principle is to search a known dataset for the K samples most similar to the data to be classified, and then to take the category with the highest proportion among these K samples as the category of the data to be classified.

When applying the KNN idea to feature point matching of images, K is usually set to 2. First, for a feature point x in the reference image, the distance between x and every feature point in the image to be aligned is computed: if the feature descriptor is binary, the Hamming distance dpq between the two descriptors is used, as shown in Equation (7); otherwise, the Euclidean distance Dpq is used, as shown in Equation (8). With K = 2, this returns the point m at the smallest distance from x and the point n at the second smallest distance. Let the distance between x and m be Dxm and the distance between x and n be Dxn, and compute the ratio of the distances T = Dxm/Dxn; if T < 0.75, point x is successfully matched with point m, otherwise the match fails.

d_{pq} = \sum_{i=1}^{N} p_i \oplus q_i   (7)

where ⊕ denotes the XOR operation measuring dissimilarity, p and q represent the descriptors of the feature points, i represents the i-th position of the descriptor, and N represents the dimensionality of the feature point descriptors.

D_{pq}(p, q) = \sqrt{ \sum_{i=1}^{N} (p_i - q_i)^2 }   (8)

where p, q, i and N have the same meanings as in Equation (7).
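The K = 2 ratio test described above can be sketched as follows for binary descriptors (shown here as 0/1 vectors rather than bit-packed bytes, so the Hamming distance of Equation (7) is simply the count of differing positions). The function name and threshold default are assumptions; for real-valued descriptors, the Euclidean distance of Equation (8) would replace the Hamming distance.

```python
import numpy as np

def ratio_test_match(desc_ref, desc_tgt, t=0.75):
    """KNN (K = 2) matching with the distance-ratio test.
    Returns the accepted (reference index, target index) pairs."""
    matches = []
    for i, p in enumerate(desc_ref):
        # Hamming distance, Equation (7): count of differing positions.
        d = np.array([np.count_nonzero(p != q) for q in desc_tgt])
        order = np.argsort(d)
        m, n = order[0], order[1]        # nearest and second-nearest points
        if d[n] > 0 and d[m] / d[n] < t:   # ratio T = Dxm / Dxn < 0.75
            matches.append((i, m))
    return matches
```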

3. Experiments and Analysis

Experiments with Image Stitching of Branches

We used the FAST algorithm to detect the feature points and the coordinate transformation matrix H of corresponding points for the geometric transformation of the images. Applying the KNN idea to feature point matching, a total of 27,426 feature points were obtained from the two images, of which 3130 matching points met the requirements. The coordinates of the feature points are shown in Figure 2 and the coordinates of the matching points are shown in Figure 3.

Comparative analysis shows that the original images (shown in Figure 4) achieve a correct matching rate of up to 95%, where the images participating in the stitching are 1800 × 1200 in size, the total number of matching points is 3130 and the number of correct matching points is 3070; the matching point correspondences are shown in Figure 5.

By matching feature points with the algorithm of Section 2.3 and using pixel-level fusion, i.e. processing at the data level, the final image is obtained as shown in Figure 6.
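Pixel-level fusion in the overlap region can be sketched as a weighted average whose weights fade each image out toward its own border (feathering). The function name and the left/right layout assumption are hypothetical; the paper does not specify its exact fusion weights.

```python
import numpy as np

def feather_blend(img_a, img_b, mask_a, mask_b):
    """Pixel-level (data-level) fusion sketch: in the overlap, each output
    pixel is a weighted average of the two images. All inputs share one
    canvas shape; masks are boolean coverage maps. Assumes a simple
    left/right layout so horizontal feathering weights suffice."""
    wa = mask_a.astype(float)
    wb = mask_b.astype(float)
    cols = np.arange(img_a.shape[1], dtype=float)
    wa *= (img_a.shape[1] - cols)      # left image fades out to the right
    wb *= cols + 1.0                   # right image fades in
    total = wa + wb
    total[total == 0] = 1.0            # avoid division by zero off-canvas
    return (img_a * wa + img_b * wb) / total
```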

Figure 2. Coordinate map of image feature points.

Figure 3. Coordinate map of matching points.

Figure 4. Stitching the original image.

Figure 5. Matching point correspondence diagram.

Figure 6. Experimental results of fused images.

4. Conclusions

This paper presents an image stitching algorithm for fruit tree branches and trunks with a high recognition rate and short running time. The algorithm uses the grayscale centroid method to determine the detected feature points, performs the geometric transformation of the images via the coordinate transformation matrix H of corresponding points, and achieves feature matching with the KNN algorithm. Comparative analysis shows a correct matching rate of up to 95%, which meets production requirements and can serve as an important basis for automatic branch pruning technology.

The image stitching algorithm proposed in this paper extracts feature points quickly, but the scale invariance of ORB is relatively poor. Subsequent research should improve the scale invariance of ORB and the overall efficiency of the image stitching algorithm.

Acknowledgements

This research was supported by the Science and Technology Plan Project of Guizhou Province (Grant No. QKHJC [2019]1152) and High-level Talents Research Initiation Fund of Guizhou Institute of Technology (Grant No. XJGC20190927).

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Lowe, D.G. (1999) Object Recognition from Local Scale-Invariant Features. Proceedings of the 7th IEEE International Conference on Computer Vision, Kerkyra, 20-27 September 1999, 1150-1157.
https://doi.org/10.1109/ICCV.1999.790410
[2] Lensch, H.P.A., Heidrich, W. and Seidel, H.P. (2000) Automated Texture Registration and Stitching for Real World Models. Proceedings the 8th Pacific Conference on Computer Graphics and Applications, Hong Kong, 5 October 2000, 317-452.
[3] Kwatra, V., Schodl, A., Essa, I., Turk, G. and Bobick, A. (2003) Graphcut Textures: Image and Video Synthesis Using Graph Cuts. ACM Transactions on Graphics, 22, 277-286.
https://doi.org/10.1145/882262.882264
[4] Bria, A. and Iannello, G. (2012) TeraStitcher—A Tool for Fast Automatic 3D Stitching of Teravoxel-Sized Microscopy Images. BMC Bioinformatics, 13, Article 316.
https://doi.org/10.1186/1471-2105-13-316
[5] Zaragoza, J., Chin, T.J., Tran, Q.H., Brown, M.S. and Suter, D. (2014) As-Projective-As-Possible Image Stitching with Moving DLT. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36, 1285-1298.
https://doi.org/10.1109/TPAMI.2013.247
[6] Zhang, F. and Liu, F. (2014) Parallax-Tolerant Image Stitching. 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, 23-28 June 2014, 3262-3269.
https://doi.org/10.1109/CVPR.2014.423
[7] Zhang, J., Chen, G.X. and Jia, Z.Y. (2017) An Image Stitching Algorithm Based on Histogram Matching and SIFT Algorithm. International Journal of Pattern Recognition and Artificial Intelligence, 31, 1754006:1-1754006:14.
https://doi.org/10.1142/S0218001417540064
[8] Chen, J., Li, Z.X., Peng, C.L., Wang, Y. and Gong, W.P. (2022) UAV Image Stitching Based on Optimal Seam and Half-Projective Warp. Remote Sensing, 14, Article 1068.
https://doi.org/10.3390/rs14051068
[9] Liu, Y., Tian, J.W., Hu, R.R., Yang, B., Liu, S., Yin, L.R. and Zheng, W.F. (2022) Improved Feature Point Pair Purification Algorithm Based on SIFT During Endoscope Image Stitching. Frontiers in Neurorobotics, 16, Article ID: 840594.
https://doi.org/10.3389/fnbot.2022.840594
[10] Nie, L., Lin, C.Y., Liao, K. and Zhao, Y. (2022) Learning Edge-Preserved Image Stitching from Multi-Scale Deep Homography. Neurocomputing, 491, 533-543.
https://doi.org/10.1016/j.neucom.2021.12.032
[11] Zhang, G.X., Qin, D.Y., Yang, J.Q., Yan, M.Y., Tang, H.P., Bie, H.Z. and Ma, L. (2022) UAV Low-Altitude Aerial Image Stitching Based on Semantic Segmentation and ORB Algorithm for Urban Traffic. Remote Sensing, 14, Article 6013.
https://doi.org/10.3390/rs14236013
[12] Zhang, Z.Y., Wang, L.X., Zheng, W.F., Yin, L.R., Hu, R.R. and Yang, B. (2022) Endoscope Image Mosaic Based on Pyramid ORB. Biomedical Signal Processing and Control, 71, Article ID: 103261.
https://doi.org/10.1016/j.bspc.2021.103261
[13] Chai, X.C., Chen, J.Y., Mao, Z.H. and Zhu, Q.K. (2023) An Upscaling-Downscaling Optimal Seamline Detection Algorithm for Very Large Remote Sensing Image Mosaicking. Remote Sensing, 15, Article 89.
https://doi.org/10.3390/rs15010089
[14] Kang, Y., Wu, R., Wu, S., Li, P.Z., Li, Q.P., Cao, K., Tan, T.T., Li, Y.R. and Zha, G.Q. (2023) A Novel Multi-View X-Ray Digital Imaging Stitching Algorithm. Journal of X-Ray Science and Technology, 31, 153-166.
https://doi.org/10.3233/XST-221261
[15] Chen, R. (2022) Research on Image Mosaic Technology Based on Improved ORB. Ph.D. Thesis, Harbin Institute of Technology, Harbin.

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.