A Theoretical Comparison among Recursive Algorithms for Fast Computation of Zernike Moments Using the Concept of Time Complexity

Zernike polynomials have been used for many years in fields such as optics, astronomy, and digital image analysis. To form these polynomials, the Zernike moments must be determined. One of the main issues in computing the moments is the factorial terms in their equation, which cause high time complexity. As a solution, several methods have been presented in recent years to reduce the time complexity of these computations. The purpose of this research is to study several of the most popular recursive methods for fast Zernike computation and to compare them using a general theoretical criterion, the worst-case time complexity. In this study, we have analyzed the selected algorithms and calculated the worst-case time complexity of each one. The results are then presented and explained, and finally a conclusion is drawn by comparing this criterion across the studied algorithms. In terms of time complexity, we have observed that although some algorithms, such as the Wee method and the Modified Prata method, succeeded in achieving smaller time complexities, other approaches did not make any significant difference compared to the classical algorithm.

…until the program is completed [30]. The moves are measured either in time units or in instruction units. In other words, to calculate the running time of a program we should either count the instructions or measure the time until the program completes its function. Counting the instructions of a program is not a standard way to evaluate time efficiency, since there are differences between the programming languages in which an algorithm may be implemented [29]. In addition, the number of instruction lines may differ depending on the programmer's choices. Measuring time units is not a reliable evaluation either, because hardware features vary between computers [29].
While running time is entirely dependent on the program, theoretical time complexity is the number of statements executed by the algorithm on input x [31]. It is based on the raw algorithm and counts the repetitions of the main operation until the algorithm halts [29]. Theoretical time complexity may be evaluated by exact-case, best-case, average-case, or worst-case analysis functions.
The best-case, average-case, and worst-case time complexities are determined by the minimum, the average, and the maximum repetition counts of the main operation, respectively, and the final result is a function of the size of the input.
The worst-case analysis function has been widely used to estimate the time efficiency of an algorithm. The standard notation for this assessment is Big-Oh (O) notation [32]. It ignores insignificant details in the calculations and determines a guaranteed time bound within which the algorithm completes [32] [33]. As a mathematical expression, the exact time complexity f(n) satisfies f(n) ≤ C · g(n) for some constant C [34]. This situation is expressed by saying that f(n) is Big-Oh of g(n), written f(n) = O(g(n)). Figure 1 shows the difference between the exact time complexity and the worst-case time complexity, represented by a black line and a blue line respectively. As we can observe, if the exact time complexity of an algorithm equals 0.723x^3 + 1.618x^2, the related worst-case time complexity will be O(x^3).
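To make the Big-Oh relation concrete, the short sketch below (our illustration, not part of the paper) checks numerically that f(x) = 0.723x^3 + 1.618x^2 stays below C · x^3 for the constant C = 3 whenever x ≥ 1:

```python
def f(x):
    # exact time complexity from the example above
    return 0.723 * x**3 + 1.618 * x**2

def g(x):
    # candidate Big-Oh bound g(x) = x^3
    return x**3

C = 3  # any constant larger than 0.723 + 1.618 works for x >= 1
assert all(f(x) <= C * g(x) for x in range(1, 10000))
print("f(x) = O(x^3) verified for 1 <= x < 10000")
```

The check succeeds because f(x)/g(x) = 0.723 + 1.618/x never exceeds 2.341 once x ≥ 1, so any C above that ratio witnesses the bound.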
The other subject to discuss is the model of calculation. Generally, there are two models for calculating theoretical time complexity [35]. A uniform model assumes that every operation, on an input of any size, takes the same constant time to complete, while a non-uniform model assigns a distinct computation time to each operation depending on the length of its input [35]. Uniform models are common for algorithms with small input domains, while a non-uniform model is recommended for evaluating algorithms that take large inputs.
N. Bastani et al.

In this study, we use the uniform model and the worst-case time complexity to evaluate the time efficiency of the selected algorithms. The worst-case time complexity has been chosen because this assessment gives the maximum time taken by the algorithm. The uniform model has been selected due to the small domain of the input used in Zernike moments: the input is the order of the polynomial, which does not exceed 15 in the literature [36]. In fact, the notable information is captured by moments up to order 15 [37]. Inputs of this size do not incur considerable cost per operation, so using the uniform model is entirely acceptable for this purpose.

Classical Method to Calculate Zernike Moments
Zernike polynomials were introduced by Frits Zernike in 1934 [38]. They are characterized by orthogonality over a continuous circle of unit radius [39] and are able to describe any wavefront aberration or phase function [1].
In a discontinuous environment, Zernike polynomials model the wavefront reconstruction [42], where ε represents the error of modeling and measuring the wavefront. The input noise is assumed to be i.i.d. random noise with zero mean and constant variance [40] [43].
Radial normalization is calculated using (2):

ρ = r / r_pup (2)

where r is the distance between the off-center and center points and r_pup is the radius of the aperture.
Finally, the Zernike polynomials [1] [40] are determined using (4), where N is the normalization factor defined [1] [40] in (5) and δ_{m0} is the usual Kronecker delta. We can merge (4) and (5). After transformation to the discontinuous environment, the Zernike moments are determined by (8) [1]. Negative values of m double the time complexity; however, they do not change its order. Consequently, we ignore negative values of this variable in the rest of the article.
To evaluate the time complexity of Zernike moments, we first obtain the time complexity of the radial functions. The main operation is different in each term of the expression (Table 1), and each term is repeated in a loop to construct the radial function. Table 2 represents the number of repetitions for each term and the related time complexity. The reason for considering multiplication as the main operation of the factorial terms, and comparison as the main operation of the other term, can be discussed as follows. To calculate n!, the algorithm below is used.

American Journal of Computational Mathematics

As we can observe, there is a compare operation to check whether n = 0 or n = 1, which is the main operation in the cases n = 0 or n = 1. Then, if n > 1, the loop starts and multiplication becomes the main operation.
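The factorial routine discussed above did not survive extraction; the sketch below is our reconstruction of it, with the compare serving as the main operation for n = 0 or n = 1 and the multiplication inside the loop as the main operation for n > 1:

```python
def fact(n):
    # compare: the main operation when n = 0 or n = 1
    if n == 0 or n == 1:
        return 1
    result = 1
    # multiplication: the main operation, repeated n - 1 times when n > 1
    for i in range(2, n + 1):
        result = result * i
    return result

print(fact(5))  # 120
```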
The other term is handled similarly; there, we have used i instead of m. n mod 2 is the remainder when n is divided by 2, and its cost can be ignored.
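Putting the pieces together, the sketch below (our illustration, using the standard factorial-based definition of the radial polynomial) shows where the factorial terms enter the classical computation; every call re-evaluates them from scratch, which is the source of the high time complexity:

```python
from math import factorial

def radial_classical(n, m, rho):
    """R_n^m(rho) = sum_{s=0}^{(n-m)/2} (-1)^s (n-s)!
                    / (s! ((n+m)/2 - s)! ((n-m)/2 - s)!) * rho^(n-2s)"""
    if (n - m) % 2 != 0:           # R_n^m vanishes when n - m is odd
        return 0.0
    total = 0.0
    for s in range((n - m) // 2 + 1):
        num = (-1) ** s * factorial(n - s)     # factorials recomputed on every pass
        den = (factorial(s)
               * factorial((n + m) // 2 - s)
               * factorial((n - m) // 2 - s))
        total += num / den * rho ** (n - 2 * s)
    return total

# R_4^0(rho) = 6 rho^4 - 6 rho^2 + 1, so R_4^0(0.5) = -0.125
print(radial_classical(4, 0, 0.5))
```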

Fast Methods to Calculate Zernike Moments
In this section, we study seven of the best-known recursive methods that attempt to reduce the time complexity of Zernike moments. We have selected recursive methods because of their success at this aim. In the following, we consider these fast algorithms in turn.

Kintner Method
In 1976, Kintner presented his method for calculating the radial functions, using a pure recursive relationship with three terms [21]. The recurrence has the form

K1 · R_n^m(ρ) = (K2 · ρ^2 + K3) · R_{n−2}^m(ρ) + K4 · R_{n−4}^m(ρ)

and the coefficients K1 through K4 are computed by a sequential process with no excess computation. The related binary tree is provided in Figure 3.
Finally, the time complexity for the complete set of Zernike moments will be O(M^2 N^2).
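A bottom-up sketch of Kintner's recurrence follows (our illustration; the coefficient formulas K1–K4 follow the standard statement of Kintner's method and should be read as an assumption, since the paper's own equations did not survive extraction):

```python
def radial_kintner(n, m, rho):
    """R_n^m(rho) via Kintner's three-term recurrence (assumes n - m even, n >= m):
       K1*R_n^m = (K2*rho^2 + K3)*R_{n-2}^m + K4*R_{n-4}^m"""
    r_prev2 = rho ** m                                        # R_m^m
    r_prev1 = (m + 2) * rho ** (m + 2) - (m + 1) * rho ** m   # R_{m+2}^m
    if n == m:
        return r_prev2
    if n == m + 2:
        return r_prev1
    for k in range(m + 4, n + 1, 2):   # bottom-up: lower orders are never recomputed
        k1 = (k + m) * (k - m) * (k - 2) / 2
        k2 = 2 * k * (k - 1) * (k - 2)
        k3 = -m * m * (k - 1) - k * (k - 1) * (k - 2)
        k4 = -k * (k + m - 2) * (k - m - 2) / 2
        r_prev2, r_prev1 = r_prev1, ((k2 * rho**2 + k3) * r_prev1 + k4 * r_prev2) / k1
    return r_prev1

print(radial_kintner(4, 0, 0.5))  # -0.125, matching 6 rho^4 - 6 rho^2 + 1
```

Note how the bottom-up loop keeps only the two previous radial values, which is what avoids the redundant recomputation that a top-down formulation would incur.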

Prata Method
In another approach, Prata offered a recursive relationship for the radial functions in 1989 [22] to expand the Zernike polynomial functions. In this method, the coefficients are evaluated by a 2-D integration formula that results from the orthogonality of the Zernike polynomials. The algorithm calculates the higher-order radial functions from the lower orders using (17),

R_n^m(ρ) = L1 · ρ · R_{n−1}^{m−1}(ρ) + L2 · R_{n−2}^m(ρ)

where L1 and L2 are constants computed by (18). The algorithm cannot be used in the cases m = 0 and m = n; in these cases, the radial functions have to be calculated by the classical method. When we consider the calculation trees of top-down programming for both the Prata and Kintner methods, we can observe that each element acts as a local formula that must reach the lower-level elements inside itself. Since these lower-level elements are not calculated globally, they have to be re-computed for every higher-level element, and this causes redundancy. These repetitions reduce the time efficiency, so we will not continue with top-down calculations in the rest of the article. In Figure 5 we have sketched the binary tree of the time complexity under bottom-up programming. As we can see, there are four multiplications as the main operation. For m = 0, we can use the last row of the corresponding table.
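A sketch of the Prata recursion with memoization follows (our illustration; the constants L1 = 2n/(n+m) and L2 = −(n−m)/(n+m) follow the standard statement of the method and are an assumption here, since (17) and (18) did not survive extraction). The m = 0 branch falls back to the classical factorial sum, mirroring the limitation noted above:

```python
from math import factorial

def _radial_classical(n, m, rho):
    # classical factorial-based fallback for the cases Prata cannot handle
    total = 0.0
    for s in range((n - m) // 2 + 1):
        total += ((-1) ** s * factorial(n - s)
                  / (factorial(s)
                     * factorial((n + m) // 2 - s)
                     * factorial((n - m) // 2 - s))
                  * rho ** (n - 2 * s))
    return total

def radial_prata(n, m, rho, memo=None):
    """R_n^m = L1*rho*R_{n-1}^{m-1} + L2*R_{n-2}^m; unusable for m = 0 and m = n."""
    if memo is None:
        memo = {}
    if (n, m) not in memo:
        if m == n:                       # base case: R_n^n = rho^n
            memo[(n, m)] = rho ** n
        elif m == 0:                     # recurrence unusable: classical fallback
            memo[(n, m)] = _radial_classical(n, 0, rho)
        else:
            l1 = 2 * n / (n + m)
            l2 = -(n - m) / (n + m)
            memo[(n, m)] = (l1 * rho * radial_prata(n - 1, m - 1, rho, memo)
                            + l2 * radial_prata(n - 2, m, rho, memo))
    return memo[(n, m)]

print(radial_prata(4, 2, 0.5))  # approximately -0.5, matching 4 rho^4 - 3 rho^2
```

The memo dictionary is what turns the naive top-down recursion into a globally shared, non-redundant computation, in line with the redundancy discussion above.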

Belkasim Method
In 1996, Belkasim and colleagues [23] expanded the complex equation of the Zernike moments to obtain a recursive relationship, which is represented below:

Q-Recursive Method
Chong and his colleagues recommended a method for the fast calculation of Zernike moments known as the q-recursive method [24]. This algorithm uses recursive equations to compute the radial functions. The recursion is based on m and does not change n on the right-hand side of the equation. To determine the radial functions, the q-recursive approach follows a relation of the form

R_n^{m−4}(ρ) = H1 · R_n^m(ρ) + (H2 + H3/ρ^2) · R_n^{m−2}(ρ)

where the coefficients H1, H2, and H3 are obtained by the accompanying formulas.
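A sketch of the q-recursive computation for a fixed order n follows (our illustration; the coefficient formulas H1–H3 follow the standard statement of the method and should be read as an assumption, since the paper's own equations did not survive extraction). Because the recursion runs over m only, each order n is self-contained:

```python
def radial_q_recursive(n, rho):
    """All R_n^m(rho) for m = n, n-2, ..., via the q-recursive relation
       R_n^{m-4} = H1*R_n^m + (H2 + H3/rho^2)*R_n^{m-2}   (requires rho > 0)."""
    values = {n: rho ** n}                                       # R_n^n
    if n >= 2:
        values[n - 2] = n * rho ** n - (n - 1) * rho ** (n - 2)  # R_n^{n-2}
    m = n
    while m - 4 >= 0:
        h3 = -4.0 * (m - 2) * (m - 3) / ((n + m - 2) * (n - m + 4))
        h2 = h3 * (n + m) * (n - m + 2) / (4.0 * (m - 1)) + (m - 2)
        h1 = m * (m - 1) / 2.0 - m * h2 + h3 * (n + m + 2) * (n - m) / 8.0
        values[m - 4] = h1 * values[m] + (h2 + h3 / rho ** 2) * values[m - 2]
        m -= 2
    return values

print(radial_q_recursive(4, 0.5)[0])  # approximately -0.125 (= 6 rho^4 - 6 rho^2 + 1)
```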

Wee Method
In 2004, Wee, Paramesran, and Takeda offered an approach for the complete set of Zernike moments that merges the Kintner, Prata, and q-recursive algorithms [25]. The main formula is the recursive formula of the Prata method. However, as noted above, there are cases in which the Prata algorithm is unusable; in those cases, the Kintner and q-recursive methods are used instead. In the Wee method, the radial functions are obtained by:
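The hybrid dispatch can be sketched as follows (our illustration only; it uses Prata's relation as the main route and, for the cases where that relation is unusable, a closed form for m = n and an inline Kintner-style recurrence for m = 0; all coefficient formulas are assumptions, since the original equations did not survive extraction):

```python
def radial_wee(n, m, rho, memo=None):
    """R_n^m via a Wee-style hybrid: Prata's recurrence where usable,
       fallbacks for m == n (closed form) and m == 0 (Kintner recurrence)."""
    if memo is None:
        memo = {}
    if (n, m) not in memo:
        if m == n:                        # Prata unusable: R_n^n = rho^n
            val = rho ** n
        elif m == 0:                      # Prata unusable: Kintner recurrence, m = 0
            r2, r1 = 1.0, 2 * rho ** 2 - 1        # R_0^0, R_2^0
            for k in range(4, n + 1, 2):
                k1 = k * k * (k - 2) / 2
                k2 = 2 * k * (k - 1) * (k - 2)
                k3 = -k * (k - 1) * (k - 2)
                k4 = -k * (k - 2) ** 2 / 2
                r2, r1 = r1, ((k2 * rho ** 2 + k3) * r1 + k4 * r2) / k1
            val = 1.0 if n == 0 else r1
        else:                             # main route: Prata's recurrence
            val = (2 * n / (n + m)) * rho * radial_wee(n - 1, m - 1, rho, memo) \
                  - ((n - m) / (n + m)) * radial_wee(n - 2, m, rho, memo)
        memo[(n, m)] = val
    return memo[(n, m)]

print(radial_wee(4, 2, 0.5))  # approximately -0.5, matching 4 rho^4 - 3 rho^2
```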

Amayeh Method
Amayeh and his colleagues designed an algorithm to calculate Zernike moments and claimed that their method needs fewer time resources than the classical approach [26]. This method uses the complex relationship of the Zernike moments, and X_{m,k} is identified as the common term of (44), which has a unified repetition; for example, for n = 10, m = 0 we have Table 3. The method is similar to the Belkasim algorithm. As we observed previously, the bottleneck of the Belkasim algorithm that increased its time complexity was the factorial terms retained in its recursive relationship.

Modified Prata Method
Singh and Walia proposed a modification of the Prata algorithm [27] that combines (17) and (18).

Results and Discussion
In this paper, we have studied several approaches that try to decrease the time complexity of Zernike moments. We have used the worst-case time complexity criterion and the uniform model to evaluate the time efficiency of the presented algorithms. As the results in Table 4 show, the classical method has a worst-case time complexity of O(M^2 N^4). The bottleneck is created by the factorial terms, which must be recalculated each time; some of the studied approaches tried to remove these terms.
In general, the time complexities of the Kintner, Prata, Q-recursive, Wee, and modified Prata approaches depend on the programming style. However, as discussed before, top-down programming causes redundancy and excess recomputation of elements. The most successful approaches, in terms of time complexity order, are the Kintner, Q-recursive, Wee, and modified Prata algorithms; these methods halve the order of the time complexity.
The Prata method was also successful in reducing the time complexity. However, its worst-case time complexity is higher than those of the algorithms mentioned in the previous paragraph.
Neither the Belkasim nor the Amayeh approach could diminish the order of the time complexity of calculating Zernike moments. However, the Belkasim method slightly reduced the coefficient of the highest-order term, from 0.07 to 0.02, as calculated before. The Amayeh method increased this coefficient compared to the classical approach.
Therefore, the main competition is among the Kintner, Prata, Q-recursive, Wee, and modified Prata algorithms, and it concerns the coefficient of the highest-order term.
The Wee and Modified Prata approaches have the smallest coefficient of the highest-order term, namely 2.25. For an exact comparison, we form an inequality by supposing that the time complexity of the Wee method is larger than that of the Modified Prata algorithm and check when this assumption holds. It follows that if N ≥ 3, the Modified Prata approach is more time-efficient than the Wee method.
As mentioned previously, the uniform model is used to evaluate the studied algorithms in this article. Although the uniform model is a popular model for evaluating the time complexity of algorithms, one of its disadvantages is assuming a uniform cost (a single time unit in this study) for every operation at every input size, while different operations do not have the same cost on a binary machine. For example, consider the binary products 1 × 1 and 11 × 1: the former takes T(n) = 1 while the latter takes T(n) = 2, although the main operation is binary multiplication in both.
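The two cost models from this discussion can be contrasted in a few lines (our illustration; the bit-length product stands in for schoolbook binary multiplication cost):

```python
def uniform_cost(a, b):
    # uniform model: one unit per multiplication, regardless of operand size
    return 1

def logarithmic_cost(a, b):
    # logarithmic-style model: cost grows with the operands' bit lengths
    return max(a.bit_length(), 1) * max(b.bit_length(), 1)

# binary 1 x 1 vs 11 x 1: same cost under the uniform model, different otherwise
print(uniform_cost(0b1, 0b1), logarithmic_cost(0b1, 0b1))    # 1 1
print(uniform_cost(0b11, 0b1), logarithmic_cost(0b11, 0b1))  # 1 2
```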
This discussion leads us to consider evaluating the algorithms with a logarithmic cost model, which assumes that the cost of every operation is a function of the number of input bits [39].
The other subject to be considered is space complexity, which concerns the memory that algorithms use while running. Space complexity could be obtained by a computational process similar to the one in this study, but in the storage domain.

Conclusions
In this study, we have evaluated seven algorithms that try to decrease the time complexity of Zernike moments. Our assessment uses the worst-case time complexity criterion and the uniform cost model. For a brief comparison of the studied algorithms, the following points can be made:
- The algorithms that removed the factorial functions from their equations were successful in reducing the order of the worst-case time complexity.
- The Belkasim and Amayeh approaches, which kept factorial terms in their equations, did not succeed in decreasing the order of the time complexity, even though the coefficient of the highest-order term is diminished in the Belkasim method. The barrier that caused these algorithms to fail was the use of factorial terms in their recursive relationships.
- Both the Kintner and Prata approaches have limitations in their computations. Kintner's method is limited to n − m ≥ 4 [45]. Similarly, the Prata algorithm is not usable for m = 0 and n = m, and the classical method must be used in these cases. However, the linear relationships, which enable higher-order moments to be obtained from lower orders, may be an advantage of this method [24].
- In the Q-recursive method, the moments of each order are independent of the moments of higher or lower orders, which makes it useful for real-time and parallel applications. This characteristic lets the whole set of Zernike moments of each order be calculated separately in a loop, without any duplicated computations.
This characteristic can be observed by drawing the time complexity tree of the algorithm, in which the branches are sequential instead of parallel.
- In the Wee and modified Prata algorithms, the factorial terms have been removed from the main equation, and as a result the efficiency of the algorithms has improved. The factorial terms are replaced with product terms, the equations contain only a small number of multiplications, and the relationships become linear, so fewer computations occur during the process. In fact, these two approaches have the smallest coefficient of the highest-order term among the studied algorithms; however, the modified Prata algorithm performs better in terms of time complexity for N ≥ 3.
- In general, recursive approaches are entirely dependent on programming style.
However, top-down programming style generates excessive steps that must be repeated for each related radial function.
There are other aspects to studying these algorithms. One concerns time complexity under the non-uniform model. While the uniform model assumes the same cost for all operations and input sizes, even very large ones, non-uniform models show how the time complexity responds to different operand sizes and operations. Such models may be considered in future work on the theoretical time complexity of Zernike moments.
Even though computers have large amounts of memory nowadays, another issue is the space complexity of each algorithm, i.e., the amount of digital memory the algorithm needs to complete. For instance, in some methods, if the factorial terms are saved in a table before the algorithm starts, the time complexity decreases substantially. However, the device must then dedicate a certain amount of memory to the algorithm, which may make it more expensive. The open issue is how to balance space complexity and time complexity so that the computation is both real-time and cost-effective.
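The table idea mentioned above can be sketched in a few lines (our illustration): the factorials up to the maximum order are computed once, trading O(n_max) extra memory for O(1) lookups afterwards:

```python
def build_factorial_table(n_max):
    # one O(n_max) pass; each entry reuses the previous one (single multiplication)
    table = [1] * (n_max + 1)
    for i in range(2, n_max + 1):
        table[i] = table[i - 1] * i
    return table

FACT = build_factorial_table(15)   # orders up to 15 suffice in practice
print(FACT[5])  # 120, now an O(1) lookup instead of a fresh factorial computation
```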

Compliance with Ethical Standards
This article does not contain any studies with animals or humans performed by any of the authors.