1. Introduction
Image stitching is the technique of combining several images with overlapping regions (which may be acquired at different times, from different perspectives, or by different sensors) into a seamless panoramic or high-resolution image. Lowe, D. G. proposed the SIFT (Scale-Invariant Feature Transform) algorithm in the late twentieth century [1]. In the twenty-first century, image stitching techniques developed rapidly. Lensch, H. P. A. et al. obtained surface images of the corresponding 3D models from multiple photographic images by comparing a stitched texture with an already registered image, thus achieving global multi-view optimisation [2]. Kwatra, V. et al. proposed a new algorithm for image and video texture synthesis; the size of its patches is not pre-selected, but is instead determined by a graph-cut technique that finds the optimal patch region for any given offset between the input and output textures [3]. Bria, A. et al. proposed a free, fully automated 3D stitching tool with micron-level resolution [4]. Zaragoza, J. et al. proposed a new estimation technique called Moving Direct Linear Transform (Moving DLT), capable of adjusting or fine-tuning to accommodate deviations of the input data from idealised conditions, achieving accurate alignment of multiple images to create large panoramas [5]. Zhang, F. et al. proposed a local stitching method for handling parallax, based on the observation that the input images need not be perfectly aligned over the entire overlapping area to be stitched [6]. Zhang, J. et al. proposed an algorithm based on histogram matching and the scale-invariant feature transform (SIFT). Chen, J. et al. proposed an image stitching algorithm based on an optimal-seam algorithm and half-projective warping, which can effectively preserve the original information of the image and obtain an ideal stitching effect [7]. Lin, M. Y. et al. designed a multi-plane alignment algorithm guided by disparity maps for images with large parallax, and proposed the concept of average tolerable parallax to help distinguish between a background plane and multiple foreground objects in an image [8]. Liu, Y. et al. proposed an improved feature point pair cleaning algorithm based on SIFT, using a K-nearest-neighbour feature point matching algorithm for rough matching; robustness tests using the RANSAC (Random Sample Consensus) method eliminate mismatched point pairs and reduce the mismatch rate [9]. Nie, L. et al. proposed an image stitching learning framework consisting of a multi-scale deep homography module and an edge-preserving deformation module [10]. Zhang, G. X. et al. proposed a stitching algorithm for UAV aerial images based on semantic segmentation and ORB to improve the stitching quality of UAV low-altitude aerial images [11]. Zhang, Z. Y. et al. used a Gaussian pyramid to improve a simple ORB-oriented algorithm, achieving a stitching speed about 10 times faster than SIFT [12]. Chai, X. C. et al. used an up-sampling and down-sampling image procedure to obtain pixel-level optimal stitching lines for large remote sensing images [13]. Kang, Y. et al. proposed a novel multi-view X-ray digital imaging stitching algorithm (MVS) based on a CdZnTe photon-counting linear-array detector to solve the problem of fan-beam X-ray stitching distortion [14]. Most existing research targets industrial applications; studies in agriculture are few, and in particular few studies on stitching images of fruit tree branches and trunks have been reported. Because of the complex background of branch and trunk images, existing methods cannot stitch them accurately. This paper therefore proposes a fruit tree branch image recognition and stitching method based on branch and trunk features.
2. Image Stitching Methods
Image stitching refers to the integration of images of the same scene taken from different views into one image using computer software [15]. We divide image stitching into four steps:
1) Pre-processing of the image to be stitched;
2) Extraction of features of the image to be stitched;
3) Performing feature matching;
4) Establishing a mapping relationship and using image fusion techniques to complete the seamless stitching of the image.
2.1. Determination of Detection Feature Points
This study uses a grey-scale (intensity) centroid based FAST algorithm to detect feature points, and then selects N feature points using the Harris corner measure. The principle is as follows: pick a candidate point P in the image and let IP denote its grey value; compare IP with the grey values of the sixteen pixels on a circle of radius three around P, as shown in Figure 1. If there are n contiguous pixels on the circle whose grey values are all greater than IP + t or all less than IP − t, where t is a threshold, then P is identified as a feature point.
Suppose P(x, y) is a feature point in the image. The moments of the neighbourhood pixels of the point P(x, y) are defined in Equation (1):

m_pq = Σ_{x, y ∈ r} x^p y^q I(x, y)   (1)

where I(x, y) is the grey value at the point (x, y), and r is the radius of the neighbourhood.
The centre of mass (intensity centroid) of the neighbourhood is given by Equation (2):

C = ( m10 / m00 , m01 / m00 )   (2)
The orientation of the FAST feature point is given by Equation (3):

θ = arctan2( m01 , m10 )   (3)
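Equations (1)-(3) can be sketched numerically with NumPy. This is an illustrative sketch: it uses a square patch centred on the feature point as an approximation of the circular neighbourhood of radius r, and the patch contents below are made up for the example.

```python
import numpy as np

def intensity_centroid_orientation(patch):
    """Centre of mass and orientation of a square patch via the
    intensity-centroid moments m_pq = sum x^p y^q I(x, y), following
    Equations (1)-(3). Coordinates are relative to the patch centre."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xs = xs - (w - 1) / 2.0   # centre the coordinate frame on P
    ys = ys - (h - 1) / 2.0
    m00 = patch.sum()                       # zeroth moment
    m10 = (xs * patch).sum()                # first moments
    m01 = (ys * patch).sum()
    cx, cy = m10 / m00, m01 / m00           # centroid, Equation (2)
    theta = np.arctan2(m01, m10)            # orientation, Equation (3)
    return (cx, cy), theta
```

For a patch whose mass lies to the right of the centre, the centroid has positive x and the orientation angle is zero, as expected.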
2.2. Geometric Transformation of Images
When images are acquired at different times, the geometry of the captured images varies considerably owing to differences in shooting angle, so a geometric transformation of the image is required. Corresponding points of the two images are related by a coordinate transformation matrix H, as shown in Equation (4).
s [x′, y′, 1]ᵀ = H [x, y, 1]ᵀ,   H = [h11 h12 h13; h21 h22 h23; h31 h32 h33]   (4)

where s is a non-zero scale factor.
Figure 1. Schematic diagram of extracting feature points.
Let (x, y) be the coordinates of a point in the original image and (x′, y′) the coordinates of the corresponding point in the transformed image; then

x′ = (h11 x + h12 y + h13) / (h31 x + h32 y + h33)
y′ = (h21 x + h22 y + h23) / (h31 x + h32 y + h33)   (5)
We use a projective transformation, i.e. the projection of an image from one coordinate plane onto another by means of the transformation matrix. The matrix is usually normalised so that the entry h33 in Equation (5) is equal to 1, as shown in Equation (6):

H = [h11 h12 h13; h21 h22 h23; h31 h32 1]   (6)
As can be seen from Equation (6), 8 parameters need to be determined; since each pair of matching points yields two linear equations, only 4 pairs of matching points need to be involved in the solution.
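The 4-point solution can be sketched as a direct linear system. This NumPy sketch assumes exactly four correspondences in general position (no three collinear); the specific matrix values in the usage below are illustrative, not taken from the paper.

```python
import numpy as np

def apply_h(H, pts):
    """Apply homography H to an (n, 2) array of points, as in Equation (5)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return pts_h[:, :2] / pts_h[:, 2:3]

def homography_from_4_points(src, dst):
    """Recover the 8 unknown entries of H (h33 fixed to 1, Equation (6))
    from exactly 4 correspondences; each pair gives two linear equations:
      h11 x + h12 y + h13 - u (h31 x + h32 y) = u
      h21 x + h22 y + h23 - v (h31 x + h32 y) = v"""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)
```

In practice many more than four tentative matches are available, so H is estimated by least squares inside a RANSAC loop rather than from a single 4-point sample.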
2.3. Feature Matching Algorithm
Image matching is a particularly critical step in image stitching: the correct matching rate largely determines the quality of the final stitching result. We use the KNN (K-nearest neighbour) algorithm. Its principle is to search a known dataset for the K samples most similar to the data to be classified, and then assign the data the category that occurs most frequently among those K samples.
When applying the idea of the KNN algorithm to feature point matching of images, the value of K is usually set to 2. For a feature point x in the reference image, compute the distance between x and every feature point in the set of feature points of the image to be aligned: if the feature descriptors are binary, use the Hamming distance d_pq between the two descriptors; otherwise use the Euclidean distance D_pq. Both are given in Equation (7). With K equal to 2, this returns the point m at the smallest distance from x and the point n at the second-smallest distance. Let the distance between x and m be D_xm and the distance between x and n be D_xn, and compute the ratio of the distances T = D_xm / D_xn; if T < 0.75, the point x is successfully matched with point m, otherwise the match fails.
d_pq = Σ_{i=1}^{N} ( p_i ⊕ q_i ),   D_pq = sqrt( Σ_{i=1}^{N} ( p_i − q_i )² )   (7)

where d_pq and D_pq represent the dissimilarity, p and q represent the descriptors of the feature points, i represents the i-th position of the descriptor, and N represents the dimensionality of the descriptors of the feature points.
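The K = 2 matching with the ratio test can be sketched for binary descriptors as follows. This is a NumPy sketch with toy 8-bit descriptors purely for illustration; real ORB descriptors are 256-bit.

```python
import numpy as np

def knn_ratio_match(desc_ref, desc_tgt, ratio=0.75):
    """For each reference descriptor, find its two nearest neighbours in
    desc_tgt by Hamming distance (Equation (7)) and keep the match only
    if the nearest/second-nearest distance ratio T is below `ratio`."""
    matches = []
    for i, p in enumerate(desc_ref):
        d = np.count_nonzero(desc_tgt != p, axis=1)  # Hamming distances
        m, n = np.argsort(d)[:2]                     # K = 2 neighbours
        if d[n] > 0 and d[m] / d[n] < ratio:         # ratio test, T < 0.75
            matches.append((i, m))
    return matches
```

A descriptor with one clearly nearest neighbour passes the test, while a descriptor equidistant from two candidates (T = 1) is rejected as ambiguous.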
3. Experiments and Analysis
Experiments on Image Stitching of Branches
We used the FAST algorithm to detect feature points, applied the coordinate transformation matrix H of corresponding image points for the geometric transformation, and used the KNN algorithm for feature point matching. A total of 27426 feature points were obtained for the 2 images, of which 3130 matching points met the requirements. The coordinates of the feature points are shown in Figure 2 and the coordinates of the matching points in Figure 3.
By comparative analysis, the matching for the original images (shown in Figure 4) reaches a correct matching rate of up to 95%; the images participating in stitching are 1800 × 1200 pixels, with 3130 matching points in total and 3070 correct matching points. The matching point correspondence diagram is shown in Figure 5.
By matching feature points with the algorithm in Section 2.3 and using pixel-level fusion, i.e. processing at the data level, the final image is obtained as shown in Figure 6.
Figure 2. Coordinate map of image feature points.
Figure 3. Coordinate map of matching points.
Figure 5. Matching point correspondence diagram.
Figure 6. Experimental results of fused images.
4. Conclusions
This paper presents an image stitching algorithm for fruit tree branches and trunks with a high recognition rate and short processing time. The algorithm uses the grey-scale centroid method to determine the detected feature points, performs the geometric transformation via the coordinate transformation matrix H of corresponding image points, and uses the KNN algorithm to achieve feature matching. Comparative analysis shows a correct matching rate of up to 95%, which meets production requirements and can serve as an important basis for automatic branch pruning technology.
The image stitching algorithm proposed in this paper extracts feature points quickly, but the scale invariance of ORB is relatively poor. Subsequent research should overcome or mitigate this shortcoming of ORB and improve the overall efficiency of the image stitching algorithm.
Acknowledgements
This research was supported by the Science and Technology Plan Project of Guizhou Province (Grant No. QKHJC [2019]1152) and High-level Talents Research Initiation Fund of Guizhou Institute of Technology (Grant No. XJGC20190927).