
This paper proposes floating-point and integer wavelet-based image interpolation methods in lifting structures, combined with polynomial curve fitting, for image resolution enhancement. The proposed prediction methods estimate the high-frequency wavelet coefficients of the original image from the available low-frequency wavelet coefficients, so that the original image can be reconstructed. To further improve the reconstruction performance, we use polynomial curve fitting to build relationships between the actual and the estimated high-frequency wavelet coefficients. Results of the proposed prediction algorithm for different wavelet transforms are compared to show that the proposed algorithm outperforms other methods.

The resolution of an image has always been an important issue in many image- and video-processing applications. Image interpolation can be used for image resolution enhancement, and many interpolation techniques have been developed to improve the quality of this task. Image and video coding techniques, such as spatial scalability and transcoding, rely on image interpolation methods and therefore require fast and accurate interpolation.

Recently, various interpolation methods have been proposed to improve the performance of image resolution enhancement, such as bilinear [

Another approach to image interpolation is the wavelet-based method, which can mitigate the problems of blurring and contrast attenuation. These algorithms predict the high-pass filtered coefficients, and the reconstructed image is obtained from these estimated coefficients. Hence, the main problem of wavelet-based interpolation is how to estimate the high-pass coefficients accurately. In [

Kim et al. [

Invertible wavelet transforms that map integers to integers [

The paper is organized as follows: Section 2 introduces the standard interpolation method and gives a brief overview of the wavelet transform and reversible integer wavelet transforms. Section 3 derives the proposed methods. Experimental results for the proposed methods are given in Section 4. Finally, Section 5 provides conclusions and future work.

This section first introduces the common interpolation concept and its main drawback in Section 2.1. The detailed description of wavelet transform is in [

Interpolation determines the parameters of a continuous image representation from a set of discrete points. The resolution enhancement process can be conceptually regarded as a two-step operation. First, the discrete data are interpolated into a continuous curve. Second, the additional sample positions are assigned values determined from that curve. Here, the simplest method, bilinear interpolation, is selected to demonstrate the process. Let the original image be denoted by f and the interpolated image by f ˜ . In the following example, the interpolation ratio is assumed to be 2. To simplify the process, one-dimensional linear interpolation is applied separately in the vertical and horizontal directions of an image to achieve the two-dimensional bilinear interpolation. The bilinear interpolation with an interpolation ratio of 2 can be formulated as

f ˜ ( 2 x ) = f ( x ) (1a)

f ˜ ( 2 x + 1 ) = [ f ( x ) + f ( x + 1 ) ] / 2 (1b)

where x = 0 , 1 , 2 , ⋯ . After performing the bilinear interpolation, the image can be zoomed in or out.
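Equations (1a)-(1b) can be sketched in a few lines (pure Python; the function name and the boundary handling at the last sample are our choices, not part of the original formulation):

```python
def linear_interp_x2(f):
    """1-D linear interpolation with ratio 2, Eqs. (1a)-(1b).
    The last odd-indexed output simply repeats the final value."""
    n = len(f)
    out = [0.0] * (2 * n)
    for x in range(n):
        out[2 * x] = f[x]                       # Eq. (1a): copy the sample
        nxt = f[x + 1] if x + 1 < n else f[x]   # assumed boundary handling
        out[2 * x + 1] = (f[x] + nxt) / 2       # Eq. (1b): average neighbors
    return out
```

Applying this separately along the rows and then the columns gives the two-dimensional bilinear interpolation described above.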

Wavelet-based interpolation is to predict high-frequency subbands, LH-band, HL-band, and HH-band, from the low-frequency subband.

Because bilinear interpolation assumes the original data are first-derivative continuous, the result is usually blurred at edges. In contrast, wavelet-based interpolation can avoid this artifact owing to its good approximation property.

The wavelet transform is a valuable tool for image compression. It provides efficient time-frequency localization and multiresolution analysis, and is thus also suitable for image interpolation. The wavelet transform decomposes data hierarchically into different subbands, and each high-frequency subband locates the regions of edges and details in the original image. Here h_{0} and h_{1} are analysis filters, and g_{0} and g_{1} are synthesis filters. In the analysis step, an input signal x is filtered with h_{0} and h_{1} and downsampled to generate the low-pass band s and the high-pass band d. In the synthesis step, s and d are upsampled and filtered with g_{0} and g_{1}. The sum of the filter outputs yields the reconstructed signal x ^ .

The two-dimensional wavelet transform can be implemented by applying a one-dimensional filter to each column vertically and to each row horizontally, which produces four subbands (i.e., the LL-, LH-, HL-, and HH-bands). The LL-band is the approximation of the original image, the LH-band represents vertical information, the HL-band represents horizontal information, and the HH-band represents diagonal information, as shown in

The wavelet transform uses two filters, L and H, in a convolution operation. However, such convolution suffers from high computational cost and requires more memory for storage. Thus, an improved approach for computing the discrete wavelet transform, called the lifting scheme, was developed. Any discrete wavelet transform can be computed with this scheme, and almost all such transforms have reduced computational complexity compared with the standard filtering algorithm. In this scheme a trivial wavelet transform, called the lazy transform, is first computed. This transform splits the input signal into even- and odd-indexed sequences.

s 0 [ n ] = x [ 2 n ] (2a)

d 0 [ n ] = x [ 2 n + 1 ] (2b)

Next, M pairs of dual lifting and lifting steps, followed by scaling with a constant K, are applied to obtain

d i [ n ] = d i − 1 [ n ] − ∑ k p i [ k ] s i − 1 [ n − k ] (3a)

s i [ n ] = s i − 1 [ n ] − ∑ k u i [ k ] d i [ n − k ] (3b)

s [ n ] = s M [ n ] / K (4a)

d [ n ] = K d M [ n ] (4b)

We can find the inverse transform by reversing the operations and flipping the signs. The inverse transform is illustrated in

s M [ n ] = K s [ n ] (5a)

d M [ n ] = d [ n ] / K (5b)

Then undo the M lifting steps and dual lifting steps to obtain

s i − 1 [ n ] = s i [ n ] + ∑ k u i [ k ] d i [ n − k ] (6a)

d i − 1 [ n ] = d i [ n ] + ∑ k p i [ k ] s i − 1 [ n − k ] (6b)

Finally, the even and odd samples are retrieved

x [ 2 n ] = s 0 [ n ] (7a)

x [ 2 n + 1 ] = d 0 [ n ] (7b)
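The forward and inverse lifting procedures of Eqs. (2)-(7) can be sketched as follows (pure Python; as an illustrative choice we use a single predict/update pair, the floating-point (2,2) pair that appears later in Eq. (12) without rounding, and assume K = 1 and symmetric boundary extension):

```python
def lifting_forward(x):
    """Forward lifting: lazy split (Eqs. (2a)-(2b)), then one predict
    (dual lifting) and one update (lifting) step (Eqs. (3a)-(3b))."""
    n = len(x) // 2
    s0 = [x[2 * i] for i in range(n)]        # Eq. (2a): even samples
    d0 = [x[2 * i + 1] for i in range(n)]    # Eq. (2b): odd samples
    s_ext = lambda i: s0[i] if i < n else s0[2 * n - 2 - i]  # symmetric boundary
    d = [d0[i] - 0.5 * (s0[i] + s_ext(i + 1)) for i in range(n)]   # predict
    d_ext = lambda i: d[i] if i >= 0 else d[-i]
    s = [s0[i] + 0.25 * (d_ext(i - 1) + d[i]) for i in range(n)]   # update
    return s, d

def lifting_inverse(s, d):
    """Inverse lifting, Eqs. (6)-(7): undo the steps in reverse order
    with flipped signs, then merge the even and odd samples."""
    n = len(s)
    d_ext = lambda i: d[i] if i >= 0 else d[-i]
    s0 = [s[i] - 0.25 * (d_ext(i - 1) + d[i]) for i in range(n)]
    s_ext = lambda i: s0[i] if i < n else s0[2 * n - 2 - i]
    d0 = [d[i] + 0.5 * (s0[i] + s_ext(i + 1)) for i in range(n)]
    x = [0.0] * (2 * n)
    for i in range(n):
        x[2 * i], x[2 * i + 1] = s0[i], d0[i]
    return x
```

Because the inverse recomputes exactly the same predict and update terms and flips their signs, the reconstruction is perfect.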

The wavelet transform produces floating-point coefficients in most cases. Although this allows perfect reconstruction of the original image, the use of finite-precision arithmetic and quantization results in lossy compression. For lossless compression, integer transforms are needed. Traditionally, integer wavelet transforms were difficult to construct, but the construction becomes very simple with the lifting scheme. Since every wavelet transform can be written using lifting, an integer version of every floating-point wavelet transform can be built with the lifting scheme. Integer wavelet transforms are created by rounding off the result of each dual lifting and lifting step before adding or subtracting.

The dual lifting and the lifting step thus become

d i [ n ] = d i − 1 [ n ] − ⌊ ∑ k p i [ k ] s i − 1 [ n − k ] + 1 2 ⌋ (8a)

s i [ n ] = s i − 1 [ n ] − ⌊ ∑ k u i [ k ] d i [ n − k ] + 1 2 ⌋ (8b)

It is invertible and the inverse can be obtained by reversing the lifting and the dual lifting steps and flipping signs:

s i − 1 [ n ] = s i [ n ] + ⌊ ∑ k u i [ k ] d i [ n − k ] + 1 2 ⌋ (9a)

d i − 1 [ n ] = d i [ n ] + ⌊ ∑ k p i [ k ] s i − 1 [ n − k ] + 1 2 ⌋ (9b)

This obviously results in an integer-to-integer transform, but the coefficients p i [ k ] and u i [ k ] are not necessarily integers, so computing the integer transform coefficients still requires floating-point operations. However, all floating-point operations can be avoided when the rational coefficients in the transform have power-of-two denominators. Here, we use the following integer wavelet transforms of the form ( N , N ˜ ) , where N is the number of vanishing moments of the analyzing high-pass filter and N ˜ is the number of vanishing moments of the synthesizing high-pass filter. In the S + P transform, S stands for sequential and P for prediction. The ( 2 + 2 , 2 ) transform is inspired by the S + P transform, using one extra lifting step to build the earlier ( 2 , 2 ) into a transform with four vanishing moments of the analyzing high-pass filter; the result differs from the earlier ( 4 , 2 ) transform and is therefore called the ( 2 + 2 , 2 ) transform. The analysis low-pass filter of 9 / 7 − F has nine coefficients and its analysis high-pass filter has seven; both the analysis and synthesis high-pass filters have four vanishing moments. The 2/6 transform has a two-coefficient analysis low-pass filter and a six-coefficient analysis high-pass filter, and it is a version of the ( 3 , 1 ) transform.

S + P { d 1 [ n ] = d 0 [ n ] − s 0 [ n ] s [ n ] = s 0 [ n ] + ⌊ 1 2 d 1 [ n ] ⌋ d [ n ] = d 1 [ n ] + ⌊ 1 4 ( s [ n − 1 ] − s [ n ] ) + 3 8 ( s [ n ] − s [ n + 1 ] ) + 2 8 d 1 [ n + 1 ] + 1 2 ⌋ (10)

( 2 + 2 , 2 ) { d 1 [ n ] = d 0 [ n ] − ⌊ 1 2 ( s 0 [ n ] + s 0 [ n + 1 ] ) + 1 2 ⌋ s [ n ] = s 0 [ n ] + ⌊ 1 4 ( d 1 [ n − 1 ] + d 1 [ n ] ) + 1 2 ⌋ d [ n ] = d 1 [ n ] − ⌊ 1 16 ( − s [ n − 1 ] + s [ n ] + s [ n + 1 ] − s [ n + 2 ] ) + 1 2 ⌋ (11)

( 2 , 2 ) { d [ n ] = d 0 [ n ] − ⌊ 1 2 ( s 0 [ n ] + s 0 [ n + 1 ] ) + 1 2 ⌋ s [ n ] = s 0 [ n ] + ⌊ 1 4 ( d [ n − 1 ] + d [ n ] ) + 1 2 ⌋ (12)

( 4 , 2 ) { d [ n ] = d 0 [ n ] − ⌊ 9 16 ( s 0 [ n ] + s 0 [ n + 1 ] ) − 1 16 ( s 0 [ n − 1 ] + s 0 [ n + 2 ] ) + 1 2 ⌋ s [ n ] = s 0 [ n ] + ⌊ 1 4 ( d [ n − 1 ] + d [ n ] ) + 1 2 ⌋ (13)

( 2 , 4 ) { d [ n ] = d 0 [ n ] − ⌊ 1 2 ( s 0 [ n ] + s 0 [ n + 1 ] ) + 1 2 ⌋ s [ n ] = s 0 [ n ] + ⌊ 19 64 ( d [ n − 1 ] + d [ n ] ) − 3 64 ( d [ n − 2 ] + d [ n + 1 ] ) + 1 2 ⌋ (14)

( 6 , 2 ) { d [ n ] = d 0 [ n ] − ⌊ 75 128 ( s 0 [ n ] + s 0 [ n + 1 ] ) − 25 256 ( s 0 [ n − 1 ] + s 0 [ n + 2 ] ) + 3 256 ( s 0 [ n − 2 ] + s 0 [ n + 3 ] ) + 1 2 ⌋ s [ n ] = s 0 [ n ] + ⌊ 1 4 ( d [ n − 1 ] + d [ n ] ) + 1 2 ⌋ (15)

( 4 , 4 ) { d [ n ] = d 0 [ n ] − ⌊ 9 16 ( s 0 [ n ] + s 0 [ n + 1 ] ) − 1 16 ( s 0 [ n − 1 ] + s 0 [ n + 2 ] ) + 1 2 ⌋ s [ n ] = s 0 [ n ] + ⌊ 9 32 ( d [ n − 1 ] + d [ n ] ) − 1 32 ( d [ n − 2 ] + d [ n + 1 ] ) + 1 2 ⌋ (16)

2 / 6 { d 1 [ n ] = d 0 [ n ] − s 0 [ n ] s [ n ] = s 0 [ n ] + ⌊ 1 2 d 1 [ n ] ⌋ d [ n ] = d 1 [ n ] + ⌊ 1 4 ( s [ n − 1 ] − s [ n + 1 ] ) + 1 2 ⌋ (17)

9 / 7 − F { d 1 [ n ] = d 0 [ n ] − ⌊ 203 128 ( s 0 [ n + 1 ] + s 0 [ n ] ) + 1 2 ⌋ s 1 [ n ] = s 0 [ n ] − ⌊ 217 4096 ( d 1 [ n ] + d 1 [ n − 1 ] ) + 1 2 ⌋ d [ n ] = d 1 [ n ] + ⌊ 113 128 ( s 1 [ n + 1 ] + s 1 [ n ] ) + 1 2 ⌋ s [ n ] = s 1 [ n ] + ⌊ 1817 4096 ( d 1 [ n ] + d 1 [ n − 1 ] ) + 1 2 ⌋ (18)
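As a concrete instance of the rounded lifting steps, the integer ( 2 , 2 ) transform of Eq. (12) and its inverse can be sketched in pure Python (the floor of x + 1/2 is implemented with integer shifts; the symmetric boundary extension is an assumption of this sketch):

```python
def int22_forward(s0, d0):
    """Integer (2,2) transform, Eq. (12): floor-rounded predict/update.
    floor(x/2 + 1/2) == (x + 1) >> 1 and floor(x/4 + 1/2) == (x + 2) >> 2."""
    n = len(s0)
    s_ext = lambda i: s0[i] if i < n else s0[2 * n - 2 - i]  # symmetric boundary
    d = [d0[i] - ((s0[i] + s_ext(i + 1) + 1) >> 1) for i in range(n)]
    d_ext = lambda i: d[i] if i >= 0 else d[-i]
    s = [s0[i] + ((d_ext(i - 1) + d[i] + 2) >> 2) for i in range(n)]
    return s, d

def int22_inverse(s, d):
    """Inverse following the pattern of Eqs. (9a)-(9b): the identical
    rounded terms are recomputed and the signs are flipped."""
    n = len(s)
    d_ext = lambda i: d[i] if i >= 0 else d[-i]
    s0 = [s[i] - ((d_ext(i - 1) + d[i] + 2) >> 2) for i in range(n)]
    s_ext = lambda i: s0[i] if i < n else s0[2 * n - 2 - i]
    d0 = [d[i] + ((s0[i] + s_ext(i + 1) + 1) >> 1) for i in range(n)]
    return s0, d0
```

All intermediate values stay integers, yet reconstruction is exact, because the inverse subtracts precisely the rounded quantities that the forward transform added.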

This section presents the proposed prediction algorithm, which uses the low-frequency coefficients to estimate the unknown high-frequency coefficients in order to enhance image resolution and obtain a well-reconstructed image.

Padding zero values into the high-frequency subbands is an easy way to reconstruct the image, but it produces a blurred result. The proposed prediction method avoids this problem by estimating the high-frequency subbands as

L H ˜ = J R o w ∗ I C o l ∗ L L (19a)

H L ˜ = I R o w ∗ J C o l ∗ L L (19b)

H H ˜ = J R o w ∗ J C o l ∗ L L (19c)

where I is the proposed low-pass prediction filter and J is the proposed high-pass prediction filter, applied to the rows and columns of an image as indicated by the subscripts; the notation * represents the convolution operator, and LL is the input low-pass filtered image. The next subsection derives { I ( 4 , 2 ) , J ( 4 , 2 ) } from the ( 4 , 2 ) filters as an example to explain the algorithm.

The lifting of ( 4 , 2 ) filter is shown in

d i = 1 16 ⋅ x 2 i − 2 + 0 ⋅ x 2 i − 1 + ( − 9 16 ) ⋅ x 2 i + 1 ⋅ x 2 i + 1 + ( − 9 16 ) ⋅ x 2 i + 2 + 0 ⋅ x 2 i + 3 + 1 16 ⋅ x 2 i + 4 (20a)

s i = 1 64 ⋅ x 2 i − 4 + 0 ⋅ x 2 i − 3 + ( − 8 64 ) ⋅ x 2 i − 2 + 1 4 ⋅ x 2 i − 1 + 46 64 ⋅ x 2 i + 1 4 ⋅ x 2 i + 1 + ( − 8 64 ) ⋅ x 2 i + 2 + 0 ⋅ x 2 i + 3 + 1 64 ⋅ x 2 i + 4 (20b)

where i = 0 , 1 , 2 , ⋯ , ( M / 2 − 1 ) (M is the input data size). The boundary conditions of an input image are extended symmetrically by x i = x − i , for i < 0 and x i = x 2 M − i , for i > M − 1 .

d i and s i induce the corresponding low-pass filter [ 1 64 , 0 , ( − 8 64 ) , 1 4 , 46 64 , 1 4 , ( − 8 64 ) , 0 , 1 64 ] and the high-pass filter [ 1 16 , 0 , ( − 9 16 ) , 1 , ( − 9 16 ) , 0 , 1 16 ] .

In the prediction problem, we need to find the relationships between the input image and the low-band image. In (20b), s i shows that nine sequential pixels of the input image are connected to one wavelet coefficient of the low-band image, indicated by dotted lines in

x ˜ 2 i = a 0 ⋅ s i − 2 + a 1 ⋅ s i − 1 + a 2 ⋅ s i + a 3 ⋅ s i + 1 + a 4 ⋅ s i + 2 (21a)

and

x ˜ 2 i + 1 = b 0 ⋅ s i − 1 + b 1 ⋅ s i + b 2 ⋅ s i + 1 + b 3 ⋅ s i + 2 (21b)

respectively, where i = 0 , 1 , 2 , ⋯ , ( M / 2 − 1 ) , and a 0 , a 1 , a 2 , a 3 , a 4 and b 0 , b 1 , b 2 , b 3 are subject to the constraints

a 0 + a 1 + a 2 + a 3 + a 4 = 1 (22a)

and

b 0 + b 1 + b 2 + b 3 = 1 (22b)

Then, substituting x ˜ 2 i and x ˜ 2 i + 1 from (21a) and (21b) for x 2 i and x 2 i + 1 in (20a) and (20b) yields

s ˜ i = ( 1 64 ) ( a 0 ⋅ s i − 4 + a 1 ⋅ s i − 3 + a 2 ⋅ s i − 2 + a 3 ⋅ s i − 1 + a 4 ⋅ s i ) + ( 0 ) ( b 0 ⋅ s i − 3 + b 1 ⋅ s i − 2 + b 2 ⋅ s i − 1 + b 3 ⋅ s i ) + ( − 8 64 ) ( a 0 ⋅ s i − 3 + a 1 ⋅ s i − 2 + a 2 ⋅ s i − 1 + a 3 ⋅ s i + a 4 ⋅ s i + 1 ) + ( 1 4 ) ( b 0 ⋅ s i − 2 + b 1 ⋅ s i − 1 + b 2 ⋅ s i + b 3 ⋅ s i + 1 ) + ( 46 64 ) ( a 0 ⋅ s i − 2 + a 1 ⋅ s i − 1 + a 2 ⋅ s i + a 3 ⋅ s i + 1 + a 4 ⋅ s i + 2 )

+ ( 1 4 ) ( b 0 ⋅ s i − 1 + b 1 ⋅ s i + b 2 ⋅ s i + 1 + b 3 ⋅ s i + 2 ) + ( − 8 64 ) ( a 0 ⋅ s i − 1 + a 1 ⋅ s i + a 2 ⋅ s i + 1 + a 3 ⋅ s i + 2 + a 4 ⋅ s i + 3 ) + ( 0 ) ( b 0 ⋅ s i + b 1 ⋅ s i + 1 + b 2 ⋅ s i + 2 + b 3 ⋅ s i + 3 ) + ( 1 64 ) ( a 0 ⋅ s i + a 1 ⋅ s i + 1 + a 2 ⋅ s i + 2 + a 3 ⋅ s i + 3 + a 4 ⋅ s i + 4 )

= ( a 0 64 ) ⋅ s i − 4 + ( − 8 a 0 64 + a 1 64 ) ⋅ s i − 3 + ( 46 a 0 64 − 8 a 1 64 + a 2 64 + b 0 4 ) ⋅ s i − 2 + ( − 8 a 0 64 + 46 a 1 64 − 8 a 2 64 + a 3 64 + b 0 4 + b 1 4 ) ⋅ s i − 1 + ( a 0 64 − 8 a 1 64 + 46 a 2 64 − 8 a 3 64 + a 4 64 + b 1 4 + b 2 4 ) ⋅ s i + ( a 1 64 − 8 a 2 64 + 46 a 3 64 + b 2 4 + b 3 4 ) ⋅ s i + 1 + ( a 2 64 − 8 a 3 64 + 46 a 4 64 + b 3 4 ) ⋅ s i + 2 + ( a 3 64 − 8 a 4 64 ) ⋅ s i + 3 + ( a 4 64 ) ⋅ s i + 4 (23a)

d ˜ i = ( 1 16 ) ( a 0 ⋅ s i − 3 + a 1 ⋅ s i − 2 + a 2 ⋅ s i − 1 + a 3 ⋅ s i + a 4 ⋅ s i + 1 ) + ( 0 ) ( b 0 ⋅ s i − 2 + b 1 ⋅ s i − 1 + b 2 ⋅ s i + b 3 ⋅ s i + 1 ) + ( − 9 16 ) ( a 0 ⋅ s i − 2 + a 1 ⋅ s i − 1 + a 2 ⋅ s i + a 3 ⋅ s i + 1 + a 4 ⋅ s i + 2 ) + ( 1 ) ( b 0 ⋅ s i − 1 + b 1 ⋅ s i + b 2 ⋅ s i + 1 + b 3 ⋅ s i + 2 ) + ( − 9 16 ) ( a 0 ⋅ s i − 1 + a 1 ⋅ s i + a 2 ⋅ s i + 1 + a 3 ⋅ s i + 2 + a 4 ⋅ s i + 3 )

+ ( 0 ) ( b 0 ⋅ s i + b 1 ⋅ s i + 1 + b 2 ⋅ s i + 2 + b 3 ⋅ s i + 3 ) + ( 1 16 ) ( a 0 ⋅ s i + a 1 ⋅ s i + 1 + a 2 ⋅ s i + 2 + a 3 ⋅ s i + 3 + a 4 ⋅ s i + 4 ) = ( a 0 16 ) ⋅ s i − 3 + ( − 9 a 0 16 + a 1 16 ) ⋅ s i − 2 + ( − 9 a 0 16 − 9 a 1 16 + a 2 16 + b 0 ) ⋅ s i − 1 + ( a 0 16 − 9 a 1 16 − 9 a 2 16 + a 3 16 + b 1 ) ⋅ s i + ( a 1 16 − 9 a 2 16 − 9 a 3 16 + a 4 16 + b 2 ) ⋅ s i + 1 + ( a 2 16 − 9 a 3 16 − 9 a 4 16 + b 3 ) ⋅ s i + 2 + ( a 3 16 − 9 a 4 16 ) ⋅ s i + 3 + ( a 4 16 ) ⋅ s i + 4 (23b)

which implies the low-pass prediction filter I ( 4 , 2 ) and the high-pass prediction filter J ( 4 , 2 ) are respectively given by

I ( 4 , 2 ) = [ ( a 0 64 ) , ( − 8 a 0 64 + a 1 64 ) , ( 46 a 0 64 − 8 a 1 64 + a 2 64 + b 0 4 ) , ( − 8 a 0 64 + 46 a 1 64 − 8 a 2 64 + a 3 64 + b 0 4 + b 1 4 ) , ( a 0 64 − 8 a 1 64 + 46 a 2 64 − 8 a 3 64 + a 4 64 + b 1 4 + b 2 4 ) , ( a 1 64 − 8 a 2 64 + 46 a 3 64 + b 2 4 + b 3 4 ) , ( a 2 64 − 8 a 3 64 + 46 a 4 64 + b 3 4 ) , ( a 3 64 − 8 a 4 64 ) , ( a 4 64 ) ] (24a)

J ( 4 , 2 ) = [ ( a 0 16 ) , ( − 9 a 0 16 + a 1 16 ) , ( − 9 a 0 16 − 9 a 1 16 + a 2 16 + b 0 ) , ( a 0 16 − 9 a 1 16 − 9 a 2 16 + a 3 16 + b 1 ) , ( a 1 16 − 9 a 2 16 − 9 a 3 16 + a 4 16 + b 2 ) , ( a 2 16 − 9 a 3 16 − 9 a 4 16 + b 3 ) , ( a 3 16 − 9 a 4 16 ) , ( a 4 16 ) ] (24b)

The prediction filters I ( 4 , 2 ) and J ( 4 , 2 ) should be symmetric, and the low-pass wavelet filter of ( 4 , 2 ) is dominated by its middle coefficient (46/64 in (20b)); thus, the coefficients satisfy the following extra constraints

a 0 = a 4 (25a)

a 1 = a 3 (25b)

b 0 = b 3 (25c)

b 1 = b 2 (25d)

a 2 > a 1 > a 0 (25e)

b 1 > b 0 (25f)

By setting a 0 = 0 , a 1 = 1 / 4 , a 2 = 1 / 2 , a 3 = 1 / 4 , a 4 = 0 and b 0 = 0 , b 1 = 1 / 2 , b 2 = 1 / 2 , b 3 = 0 empirically as one solution for general images, one has

I ( 4 , 2 ) = [ 0 , 1 256 , ( − 3 128 ) , 63 256 , 35 64 , 63 256 , ( − 3 128 ) , 1 256 , 0 ] (26a)

J ( 4 , 2 ) = [ 0 , 1 64 , ( − 7 64 ) , 3 32 , 3 32 , ( − 7 64 ) , 1 64 , 0 ] (26b)

As a result, the unknown subbands ( L H ˜ , H L ˜ , H H ˜ ) can be constructed by substituting I ( 4 , 2 ) for I and J ( 4 , 2 ) for J in (19a)-(19c).
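A sketch of this prediction step (pure Python; the helper names, the filter centering, and the row-then-column orientation convention are our assumptions) applies the separable filters of Eqs. (26a)-(26b) with symmetric boundary extension, as in Eqs. (19a)-(19c):

```python
# Prediction filter coefficients from Eqs. (26a)-(26b)
I42 = [0, 1/256, -3/128, 63/256, 35/64, 63/256, -3/128, 1/256, 0]
J42 = [0, 1/64, -7/64, 3/32, 3/32, -7/64, 1/64, 0]

def filt_sym(x, h):
    """Filter a 1-D sequence with h using symmetric boundary extension.
    The filter is centered at len(h)//2 (an arbitrary choice for even lengths)."""
    n, c = len(x), len(h) // 2
    def ext(i):
        if i < 0:
            i = -i
        if i >= n:
            i = 2 * (n - 1) - i
        return x[i]
    return [sum(h[k] * ext(i - c + k) for k in range(len(h)))
            for i in range(n)]

def predict_subband(ll, h_row, h_col):
    """Apply h_row along each row, then h_col along each column:
    the separable convolution of Eqs. (19a)-(19c)."""
    rows = [filt_sym(r, h_row) for r in ll]
    cols = [filt_sym(list(c), h_col) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

Since I ( 4 , 2 ) sums to 1 and J ( 4 , 2 ) sums to 0, a constant LL-band predicts zero-valued high-frequency coefficients, as one would expect for a smooth region.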

If the actual high-frequency coefficients of the reconstructed image are known, we can build the relationship between the actual and the estimated high-frequency coefficients; this relationship, obtained by polynomial curve fitting, improves the prediction algorithm. In Section 4, we denote this situation as one layer. If the actual high-frequency coefficients are not known, we can decompose the original low-frequency image into four subbands, which form layer two of the wavelet transform of the reconstructed image, and use the LL-band of layer two to predict the other three high-frequency subbands of layer two. We can then build the relationship between the actual and the estimated high-frequency coefficients in layer two to improve the prediction algorithm through polynomial curve fitting. In Section 4, we denote this situation as two layers. Denote the exact coefficient at position ( m , n ) in LH by y L H ( m , n ) and the estimated coefficient at position ( m , n ) in LH by x L H ( m , n ) . Applying this to all coefficients of LH, we obtain the weights a L H and b L H by

y L H ( m , n ) = a L H × x L H ( m , n ) + b L H (27)

These weights are subsequently used to improve the accuracy of the estimated LH-subband coefficients. The result is then rounded off to obtain the integer version of the wavelet coefficients of LH:

y ′ L H ( m , n ) = ⌊ a L H × x L H ( m , n ) + b L H ⌋ , (28)

where y ′ L H ( m , n ) is the improved integer coefficient at position ( m , n ) .

A similar process applies to the HL and HH subbands. After the coefficient improvement, y ′ L H , y ′ H L and y ′ H H are obtained as above, and the high-resolution image is produced by applying the inverse integer wavelet transform.

Integer analysis wavelet filter | Low-pass/high-pass prediction filter | Coefficients
---|---|---
S + P | I S + P | [0.5000]
| J S + P | [0.1250, 0.0625, −0.1875]
( 2 + 2 , 2 ) | I ( 2 + 2 , 2 ) | [−0.0313, 0.2500, 0.5625, 0.2500, −0.0313]
| J ( 2 + 2 , 2 ) | [−0.0020, 0.0176, −0.1035, 0.0879, 0.0879, −0.1035, 0.0176, −0.0020]
( 2 , 2 ) | I ( 2 , 2 ) | [−0.0313, 0.2500, 0.5625, 0.2500, −0.0313]
| J ( 2 , 2 ) | [−0.1250, 0.1250, 0.1250, −0.1250]
( 4 , 2 ) | I ( 4 , 2 ) | [0, 0.0039, −0.0234, 0.2461, 0.5469, 0.2461, −0.0234, 0.0039, 0]
| J ( 4 , 2 ) | [0, 0.0156, −0.1094, 0.0938, 0.0938, −0.1094, 0.0156, 0]
( 2 , 4 ) | I ( 2 , 4 ) | [0, 0.0059, −0.0430, 0.2441, 0.5859, 0.2441, −0.0430, 0.0059, 0]
| J ( 2 , 4 ) | [0, −0.1250, 0.1250, 0.1250, −0.1250, 0]
( 6 , 2 ) | I ( 6 , 2 ) | [0, −0.0002, 0.0008, −0.0051, 0.0627, 0.1927, 0.4979, 0.1927, 0.0627, −0.0051, 0.0008, −0.0002, 0]
| J ( 6 , 2 ) | [0, −0.0075, 0.0039, −0.0242, 0.0251, −0.0042, −0.0042, 0.0251, −0.0242, 0.0039, −0.0007, 0]
( 4 , 4 ) | I ( 4 , 4 ) | [0, −0.0001, 0.0018, −0.0060, 0.0601, 0.1936, 0.5012, 0.1936, 0.0601, −0.0060, 0.0018, −0.0001, 0]
| J ( 4 , 4 ) | [0, 0.0039, −0.0234, 0.0156, 0.0039, 0.0039, 0.0156, −0.0234, 0.0039, 0]
2/6 | I 2 / 6 | [0.5000]
| J 2 / 6 | [0.1250, 0, −0.1250]
9 / 7 − F | I 9 / 7 − F | [0, 0.0067, −0.0146, 0.2433, 0.5292, 0.2433, −0.0146, 0.0067, 0]
| J 9 / 7 − F | [0, 0.0228, −0.1310, 0.1081, 0.1081, −0.1310, 0.0228, 0]

The prediction filters of all integer wavelet filters used in this paper are listed above.

In this paper, we evaluate the quality of the proposed interpolation method. The test images are shown in

Tables 2-8 show the reconstruction results of the proposed interpolation method for the test images under each integer wavelet transform with a zooming ratio of 1/2. The test images are reduced by the different integer wavelet transforms, and the reduced images are then enlarged with the proposed method. "One layer" in the tables means that the original input images are decomposed into one layer, the LL-band of layer one is used to predict the other high-frequency subbands, and linear relations between the predicted and the actual high-frequency coefficients in layer one are built to reconstruct the original images. "Two layers" means that the original input images are decomposed into two layers, the LL-band of layer two is used to predict the other high-frequency subbands, and linear relations between the predicted and the actual high-frequency coefficients in layer two are built to reconstruct the original images. Tables 2-8 show that the linear relationships built from the one-layer and the two-layer decompositions have similar effects.

Tables 9-15 compare, in terms of PSNR against the original image, the images reconstructed by the proposed method under different floating-point and integer wavelet transforms. They show that the high-frequency coefficients estimated in floating point are better than those estimated in integers, but the compression advantages of integer wavelet transforms make the integer distortion tolerable.

The proposed method is compared with other interpolation methods, such as zero padding, [

Lena | ( 2 , 2 ) | ( 2 , 4 ) | ( 4 , 2 ) | ( 4 , 4 ) | ( 6 , 2 ) | 9 / 7 − F | 2/6 | S + P | ( 2 + 2 , 2 ) |
---|---|---|---|---|---|---|---|---|---|

Integer/Two layers | 35.85 | 36.03 | 36.01 | 36.00 | 36.06 | 35.84 | 35.29 | 35.54 | 35.93 |

Integer/One layer | 35.86 | 36.02 | 36.01 | 36.01 | 36.07 | 35.85 | 35.29 | 35.53 | 35.97 |

Baboon | ( 2 , 2 ) | ( 2 , 4 ) | ( 4 , 2 ) | ( 4 , 4 ) | ( 6 , 2 ) | 9 / 7 − F | 2/6 | S + P | ( 2 + 2 , 2 ) |
---|---|---|---|---|---|---|---|---|---|

Integer/Two layers | 24.14 | 24.30 | 24.18 | 24.32 | 24.20 | 24.28 | 24.01 | 23.97 | 24.20 |

Integer/One layer | 24.20 | 24.32 | 24.23 | 24.32 | 24.21 | 24.28 | 24.02 | 23.98 | 24.20 |

Barbara | ( 2 , 2 ) | ( 2 , 4 ) | ( 4 , 2 ) | ( 4 , 4 ) | ( 6 , 2 ) | 9 / 7 − F | 2/6 | S + P | ( 2 + 2 , 2 ) |
---|---|---|---|---|---|---|---|---|---|

Integer/Two layers | 25.84 | 25.90 | 25.72 | 25.81 | 25.52 | 25.86 | 25.71 | 25.58 | 25.62 |

Integer/One layer | 26.06 | 26.00 | 26.11 | 26.03 | 25.92 | 26.00 | 25.91 | 25.76 | 26.13 |

Boat | ( 2 , 2 ) | ( 2 , 4 ) | ( 4 , 2 ) | ( 4 , 4 ) | ( 6 , 2 ) | 9 / 7 − F | 2/6 | S + P | ( 2 + 2 , 2 ) |
---|---|---|---|---|---|---|---|---|---|

Integer/Two layers | 30.26 | 30.24 | 30.70 | 30.77 | 30.76 | 30.59 | 30.45 | 30.54 | 30.69 |

Integer/One layer | 30.73 | 30.89 | 30.77 | 30.82 | 30.78 | 30.79 | 30.46 | 30.55 | 30.75 |

Peppers | ( 2 , 2 ) | ( 2 , 4 ) | ( 4 , 2 ) | ( 4 , 4 ) | ( 6 , 2 ) | 9 / 7 − F | 2/6 | S + P | ( 2 + 2 , 2 ) |
---|---|---|---|---|---|---|---|---|---|

Integer/Two layers | 32.51 | 32.53 | 32.86 | 33.03 | 33.00 | 32.52 | 31.99 | 32.11 | 32.92 |

Integer/One layer | 33.10 | 33.18 | 33.16 | 33.30 | 33.19 | 32.85 | 31.99 | 32.11 | 33.16 |

Medical 1 | ( 2 , 2 ) | ( 2 , 4 ) | ( 4 , 2 ) | ( 4 , 4 ) | ( 6 , 2 ) | 9 / 7 − F | 2/6 | S + P | ( 2 + 2 , 2 ) |
---|---|---|---|---|---|---|---|---|---|

Integer/Two layers | 37.24 | 37.21 | 37.23 | 37.30 | 37.20 | 37.25 | 36.73 | 36.77 | 37.29 |

Integer/One layer | 37.28 | 37.30 | 37.22 | 37.31 | 37.19 | 37.27 | 36.73 | 36.77 | 37.28 |

Medical 2 | ( 2 , 2 ) | ( 2 , 4 ) | ( 4 , 2 ) | ( 4 , 4 ) | ( 6 , 2 ) | 9 / 7 − F | 2/6 | S + P | ( 2 + 2 , 2 ) |
---|---|---|---|---|---|---|---|---|---|

Integer/Two layers | 44.12 | 44.17 | 44.34 | 44.39 | 44.25 | 43.05 | 43.85 | 43.73 | 44.47 |

Integer/One layer | 44.33 | 44.36 | 44.34 | 44.39 | 44.31 | 43.07 | 43.85 | 43.73 | 44.46 |

Lena | ( 2 , 2 ) | ( 2 , 4 ) | ( 4 , 2 ) | ( 4 , 4 ) | ( 6 , 2 ) | 9 / 7 − F | 2/6 | S + P | ( 2 + 2 , 2 ) |
---|---|---|---|---|---|---|---|---|---|

Floating-point/One layer | 35.90 | 36.09 | 36.04 | 36.04 | 36.08 | 36.16 | 35.37 | 35.63 | 36.01 |

Integer/One layer | 35.86 | 36.02 | 36.01 | 36.01 | 36.07 | 35.85 | 35.29 | 35.53 | 35.97 |

Baboon | ( 2 , 2 ) | ( 2 , 4 ) | ( 4 , 2 ) | ( 4 , 4 ) | ( 6 , 2 ) | 9 / 7 − F | 2/6 | S + P | ( 2 + 2 , 2 ) |
---|---|---|---|---|---|---|---|---|---|

Floating-point/One layer | 24.20 | 24.33 | 24.22 | 24.33 | 24.20 | 24.31 | 24.03 | 23.99 | 24.20 |

Integer/One layer | 24.20 | 24.32 | 24.23 | 24.32 | 24.21 | 24.28 | 24.02 | 23.98 | 24.20 |

Barbara | ( 2 , 2 ) | ( 2 , 4 ) | ( 4 , 2 ) | ( 4 , 4 ) | ( 6 , 2 ) | 9 / 7 − F | 2/6 | S + P | ( 2 + 2 , 2 ) |
---|---|---|---|---|---|---|---|---|---|

Floating-point/One layer | 26.07 | 26.01 | 26.12 | 26.04 | 25.93 | 26.05 | 25.93 | 25.78 | 26.13 |

Integer/One layer | 26.06 | 26.00 | 26.11 | 26.03 | 25.92 | 26.00 | 25.91 | 25.76 | 26.13 |

Boat | ( 2 , 2 ) | ( 2 , 4 ) | ( 4 , 2 ) | ( 4 , 4 ) | ( 6 , 2 ) | 9 / 7 − F | 2/6 | S + P | ( 2 + 2 , 2 ) |
---|---|---|---|---|---|---|---|---|---|

Floating-point/One layer | 30.74 | 30.91 | 30.78 | 30.84 | 30.78 | 30.92 | 30.49 | 30.59 | 30.75 |

Integer/One layer | 30.73 | 30.89 | 30.77 | 30.82 | 30.78 | 30.79 | 30.46 | 30.55 | 30.75 |

Peppers | ( 2 , 2 ) | ( 2 , 4 ) | ( 4 , 2 ) | ( 4 , 4 ) | ( 6 , 2 ) | 9 / 7 − F | 2/6 | S + P | ( 2 + 2 , 2 ) |
---|---|---|---|---|---|---|---|---|---|

Floating-point/One layer | 33.12 | 33.22 | 33.17 | 33.32 | 33.24 | 33.05 | 32.05 | 32.16 | 33.15 |

Integer/One layer | 33.10 | 33.18 | 33.16 | 33.30 | 33.19 | 32.85 | 31.99 | 32.11 | 33.16 |

Medical 1 | ( 2 , 2 ) | ( 2 , 4 ) | ( 4 , 2 ) | ( 4 , 4 ) | ( 6 , 2 ) | 9 / 7 − F | 2/6 | S + P | ( 2 + 2 , 2 ) |
---|---|---|---|---|---|---|---|---|---|

Floating-point/One layer | 37.31 | 37.44 | 37.32 | 37.43 | 37.30 | 37.44 | 36.85 | 36.90 | 37.29 |

Integer/One layer | 37.28 | 37.30 | 37.22 | 37.31 | 37.19 | 37.27 | 36.73 | 36.77 | 37.28 |

Medical 2 | ( 2 , 2 ) | ( 2 , 4 ) | ( 4 , 2 ) | ( 4 , 4 ) | ( 6 , 2 ) | 9 / 7 − F | 2/6 | S + P | ( 2 + 2 , 2 ) |
---|---|---|---|---|---|---|---|---|---|

Floating-point/One layer | 44.52 | 44.61 | 44.51 | 44.60 | 44.47 | 44.61 | 44.44 | 44.40 | 44.49 |

Integer/One layer | 44.33 | 44.36 | 44.34 | 44.39 | 44.31 | 43.07 | 43.85 | 43.73 | 44.46 |

for ( 2 , 2 ) wavelet filter with zooming ratio 1/2 in

The results for ( 2 , 2 ) wavelet filter in

( 2 , 2 ) | Zero padding | [ | [ | DT-CWT-NLM [ | SWT-DWT [ | Proposed Method (integer) | Proposed Method (floating-point) |
---|---|---|---|---|---|---|---|

Lena | 34.60 | 35.26 | 35.33 | 29.08 | 31.60 | 35.86 | 35.90 |

Baboon | 24.12 | 24.11 | 24.19 | 21.28 | 21.73 | 24.20 | 24.20 |

Barbara | 25.77 | 25.53 | 25.85 | 22.83 | 23.28 | 26.06 | 26.07 |

Boat | 30.29 | 30.68 | 30.73 | 26.06 | 27.66 | 30.73 | 30.74 |

Peppers | 30.87 | 33.07 | 33.13 | 28.73 | 29.91 | 33.10 | 33.12 |

Medical 1 | 36.81 | 37.13 | 37.23 | 34.32 | 34.89 | 37.28 | 37.31 |

Medical 2 | 44.41 | 44.40 | 44.64 | 37.52 | 40.19 | 44.33 | 44.52 |

9 / 7 − F | Zero padding | [ | [ | DT-CWT-NLM [ | SWT-DWT [ | Proposed Method (integer) | Proposed Method (floating-point) |
---|---|---|---|---|---|---|---|

Lena | 35.33 | 35.48 | 35.63 | 29.48 | 33.03 | 35.85 | 36.16 |

Baboon | 24.28 | 24.27 | 24.30 | 21.90 | 22.84 | 24.28 | 24.31 |

Barbara | 25.80 | 25.67 | 25.91 | 23.48 | 24.49 | 26.00 | 26.05 |

Boat | 30.74 | 30.87 | 30.93 | 26.56 | 28.72 | 30.79 | 30.92 |

Peppers | 31.37 | 33.03 | 33.22 | 29.14 | 30.94 | 32.85 | 33.05 |

Medical 1 | 37.11 | 37.26 | 37.34 | 34.62 | 35.76 | 37.27 | 37.44 |

Medical 2 | 44.54 | 44.52 | 44.75 | 36.37 | 40.90 | 43.07 | 44.61 |

input images are continuous, so it is suited to predicting smooth, soft images and performs less well on images with many edges and high-frequency components. Peppers and Medical 2 give worse results because of this continuity assumption. By applying the linear relation to tune the coefficients, the proposed method produces better reconstructions. Literature [

One can find which filter is the best under each condition from

Images \ Conditions | Floating-point/Two layers | Floating-point/One layer | Integer/Two layers | Integer/One layer
---|---|---|---|---

Lena | 9 / 7 − F | 9 / 7 − F | ( 6 , 2 ) | ( 6 , 2 ) |

Baboon | ( 4 , 4 ) | ( 2 , 4 ) ( 4 , 4 ) | ( 4 , 4 ) | ( 2 , 4 ) ( 4 , 4 ) |

Barbara | ( 2 , 4 ) | ( 2 + 2 , 2 ) | ( 2 , 4 ) | ( 2 + 2 , 2 ) |

Boat | ( 4 , 4 ) | 9 / 7 − F | ( 4 , 4 ) | ( 2 , 4 ) |

Peppers | ( 4 , 4 ) | ( 4 , 4 ) | ( 4 , 4 ) | ( 4 , 4 ) |

Medical 1 | ( 4 , 4 ) 9 / 7 − F | ( 2 , 4 ) 9 / 7 − F | ( 4 , 4 ) | ( 4 , 4 ) |

Medical 2 | ( 2 , 4 ) 9 / 7 − F | ( 2 , 4 ) 9 / 7 − F | ( 2 + 2 , 2 ) | ( 2 + 2 , 2 ) |

algorithm.

In this paper, wavelet-based image interpolation for high-performance image resolution enhancement is proposed, predicting the detail coefficients of the original image from the low-pass filtered image by observing the wavelet transform in the lifting scheme. The proposed method can predict the vertical, horizontal, and diagonal subbands, so it is well suited to reconstructing the original image. We then utilize polynomial curve fitting to build linear relationships between the actual and the predicted high-frequency subbands; these relationships improve the accuracy of the prediction algorithm. Images reconstructed by the proposed method are also in integer form. Experimental results compare the performance of different integer wavelet transforms with the proposed method on several well-known natural images. The proposed methods show better performance than bilinear, bicubic, and other wavelet-based methods.

This work was supported by the Ministry of Science and Technology of the Republic of China under contracts MOST 107-2221-E-006-203-MY2 and MOST 105-2221-E-006-102-MY3.

The authors declare no conflicts of interest regarding the publication of this paper.

Chen, C.-Y., Guo, S.-M., Tsai, C.-H. and Tsai, J.S.-H. (2018) Integer Wavelet-Based Image Interpolation in Lifting Structure for Image Resolution Enhancement. Applied Mathematics, 9, 1156-1178. https://doi.org/10.4236/am.2018.910077