
The intensity-hue-saturation (IHS) transform is one of the most commonly used methods for image fusion. Usually, the intensity image is replaced by the panchromatic (PAN) image, or the difference between the PAN image and the intensity image is added to each band of the RGB images. The spatial structure information in the PAN image can be effectively injected into the fused multi-spectral (MS) images using the IHS method. However, spectral distortion is the typical factor that deteriorates the quality of the fused results. In this study, a hybrid image fusion method integrating IHS and minimum mean-square-error (MMSE) is proposed to mitigate spectral distortion. Firstly, the IHS transform is used to derive the intensity image; secondly, the MMSE algorithm is used to fuse the histogram-matched PAN image and the intensity image; thirdly, an optimization calculation is employed to derive the combination coefficients, so that the new intensity image can be expressed as a combination of the intensity image and the PAN image. Fused MS images with high spatial resolution are then generated by the inverse IHS transform. In numerical experiments, QuickBird images were used to evaluate the performance of the proposed algorithm. It was found that the spatial resolution was increased significantly while spectral distortion was abated in the fusion results.

Multispectral (MS) remotely sensed imagery, which captures the radiance of different land covers in multiple spectral bands, can accurately map the composition of the land surface. However, multispectral sensors usually have lower spatial resolution, which limits their application in mapping complex land-surface morphological structure. High-spatial-resolution remotely sensed imagery, obtained from commercial satellite sensors, has the potential to give a more accurate description of the urban surface, and has been used extensively in urban planning, urban building extraction and decision support [

Image fusion, or pan-sharpening, is a technique that produces images with both high spatial and high spectral resolution by injecting the spatial detail of the higher-resolution panchromatic (PAN) image into the MS channels [

Pan-sharpening uses a panchromatic image to sharpen the multispectral images. A pan-sharpening algorithm involves several steps. Firstly, the PAN and MS images are registered to obtain spatially aligned images, a pivotal step for attaining effective fusion results. Secondly, spatial information is extracted from the high-resolution PAN image using an algorithm such as the wavelet transform, the intensity-hue-saturation (IHS) transform or the principal component transform. Thirdly, the extracted spatial information is injected into the MS images to sharpen their spatial resolution while preserving the spectral information they contain. Finally, an assessment is made to evaluate the effectiveness of the pan-sharpening results. A key point in this process is the mechanism for extracting and injecting the spatial information, which has become a hot topic in the application of remotely sensed imagery.

Many pan-sharpening methods have been proposed in the past twenty years. These algorithms fall into four categories: projection-substitution methods, numerical methods, multi-resolution-analysis-based methods and hybrid methods.

The IHS transformation and the PCA transformation are two representative projection-substitution fusion methods. In the IHS transform, the MS images are converted from the Red-Green-Blue (RGB) color space into the intensity-hue-saturation color space, and the intensity image, which mainly contains the low-resolution spatial detail, is substituted by the histogram-matched PAN image. The fusion results are attained by the inverse transform from IHS back to RGB color space [

PCA-based fusion is commonly used because the principal components are uncorrelated after the PCA transform. The first principal component, which is considered to contain sufficient spatial information because it has the largest variance among the principal components, is replaced by the histogram-matched PAN image [

In the numerical fusion algorithms, the PAN image is assumed to be a linear combination of the original high-resolution MS bands, and the combination coefficients are estimated using the degraded low-resolution MS bands [

Much attention has been paid to the multi-resolution-analysis-based methods. The idea behind these methods is that the spatial information missing from the MS images can be inferred from the high frequencies of the PAN image, which is the foundation of the ARSIS concept [

Because each kind of fusion algorithm has its own limitations, hybrid algorithms such as IHS with wavelets, PCA with wavelets, and IHS with contourlets are used to give better fusion results. The intensity image or the first principal component is extracted using the corresponding transform, and wavelet decomposition is then applied to the intensity image and the PAN image simultaneously. The wavelet coefficients of the approximation part of the intensity image are replaced by the PAN image's approximation coefficients, and the fused MS image is obtained by the inverse wavelet transform. Hybrid algorithms usually achieve better fusion results.

As pointed out by Tu [

The outline of this paper is as follows. A brief introduction is given in the first section. The proposed hybrid fusion algorithm is introduced in Section 2. Numerical experiments and results are presented in Section 3. Section 4 discusses the experimental results, and conclusions are drawn in Section 5.

Based on the fact that the fused high-resolution MS images contain spatial information from both the low-resolution MS images and the panchromatic image, the proposed hybrid pan-sharpening method seeks the optimal combination coefficients of the MS intensity image and the panchromatic image to obtain the optimum fusion result. The flowchart of the hybrid pan-sharpening method is shown in

The IHS transform is extensively used to convert MS images from the RGB color space into the IHS color space. The intensity image contains most of the spatial information of the scene, while the hue and saturation images reflect the spectral information of the land cover. Compared with the PAN image, the intensity image has lower spatial resolution, which leaves the MS images short of spatial information. Therefore, the intensity image is usually replaced by the histogram-matched PAN image to enhance the spatial structure of the MS images. The standard IHS-based fusion algorithm consists of the following four steps.

Firstly, a band combination of the MS images is used to form the RGB components, and the low-spatial-resolution RGB images are then upsampled to match the size of the high-spatial-resolution PAN image [

Secondly, the IHS transform is applied to convert the images from the RGB color space into the IHS color space using Equation (1).

\[
\begin{bmatrix} I \\ v_1 \\ v_2 \end{bmatrix}
=
\begin{bmatrix}
\tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{3} \\
-\tfrac{\sqrt{2}}{6} & -\tfrac{\sqrt{2}}{6} & \tfrac{2\sqrt{2}}{6} \\
\tfrac{1}{\sqrt{2}} & -\tfrac{1}{\sqrt{2}} & 0
\end{bmatrix}
\begin{bmatrix} R \\ G \\ B \end{bmatrix}
\tag{1}
\]

where v_1 and v_2 are intermediate variables. The hue and saturation components in the IHS space are given as

\[
H = \tan^{-1}\!\left(\frac{v_2}{v_1}\right), \qquad S = \sqrt{v_1^2 + v_2^2}
\tag{2}
\]

Thirdly, the intensity image I is replaced by the histogram-matched PAN image. Finally, the inverse IHS transform is applied to obtain the fused MS images using Equation (3).

\[
\begin{bmatrix} R' \\ G' \\ B' \end{bmatrix}
=
\begin{bmatrix}
1 & -\tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \\
1 & -\tfrac{1}{\sqrt{2}} & -\tfrac{1}{\sqrt{2}} \\
1 & \sqrt{2} & 0
\end{bmatrix}
\begin{bmatrix} \mathrm{PAN} \\ v_1 \\ v_2 \end{bmatrix}
\tag{3}
\]

where R', G' and B' are the fused MS bands.
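The forward and inverse transforms of Equations (1)-(3) can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the paper; the function and array names are assumptions, and the bands are assumed to be float arrays of equal size.

```python
import numpy as np

# Forward IHS matrix from Equation (1) and its inverse from Equation (3).
S2 = np.sqrt(2.0)
FWD = np.array([[1/3,    1/3,    1/3],
                [-S2/6,  -S2/6,  2*S2/6],
                [1/S2,   -1/S2,  0.0]])
INV = np.array([[1.0, -1/S2,  1/S2],
                [1.0, -1/S2, -1/S2],
                [1.0,  S2,    0.0]])

def ihs_fuse(r, g, b, pan):
    """Standard IHS fusion: replace the intensity component with PAN."""
    rgb = np.stack([r, g, b])              # shape (3, H, W)
    i_v = np.tensordot(FWD, rgb, axes=1)   # components I, v1, v2
    i_v[0] = pan                           # substitute the intensity image
    return np.tensordot(INV, i_v, axes=1)  # fused bands R', G', B'
```

Since the inverse matrix of Equation (3) times the forward matrix of Equation (1) is the identity, this substitution is equivalent to adding δ = PAN − I to each band, as Equation (4) below shows.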

Tu [

\[
\begin{bmatrix} R' \\ G' \\ B' \end{bmatrix}
=
\begin{bmatrix}
1 & -\tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \\
1 & -\tfrac{1}{\sqrt{2}} & -\tfrac{1}{\sqrt{2}} \\
1 & \sqrt{2} & 0
\end{bmatrix}
\begin{bmatrix} I + (\mathrm{PAN} - I) \\ v_1 \\ v_2 \end{bmatrix}
=
\begin{bmatrix}
1 & -\tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \\
1 & -\tfrac{1}{\sqrt{2}} & -\tfrac{1}{\sqrt{2}} \\
1 & \sqrt{2} & 0
\end{bmatrix}
\begin{bmatrix} I + \delta \\ v_1 \\ v_2 \end{bmatrix}
=
\begin{bmatrix} R + \delta \\ G + \delta \\ B + \delta \end{bmatrix}
\tag{4}
\]

where

\[
\delta = \mathrm{PAN} - I
\tag{5}
\]

It was found that the spectral distortion is mainly due to the change in the saturation value, i.e., “the saturation value is expanded and stretched (S' > S) when the PAN value is less than its corresponding I value; the saturation value is compressed (S' < S) when the PAN value is larger than the I value” [

To give a concise description of the hybrid pan-sharpening algorithm, some symbols are introduced. Let M_i, i = 1, ⋯, N, of size N_r × N_c, denote the ith band of the N MS images. The matrix P is the PAN image, of size rN_r × rN_c, where r is the ratio of spatial resolution between the PAN image and the MS images; for example, r equals four for the QuickBird sensor. M_i and P are also used to denote the corresponding lexicographically ordered vectors, of size N_rN_c × 1 and r²N_rN_c × 1, respectively.

According to the spatial-resolution ratio r between the PAN and MS images, the low-resolution MS images are upsampled to obtain new MS images of the same size as the PAN image. The IHS transform is then used to convert the new MS images from the RGB color space into the IHS color space, yielding three component images: the intensity image I, the hue image H and the saturation image S.

The new PAN image P_1 is obtained by histogram matching via Equation (6).

\[
P_1 = (P - \mu_P)\,\frac{\sigma_I}{\sigma_P} + \mu_I
\tag{6}
\]

where μ_P and μ_I are the mean values of the PAN image and the intensity image, respectively, and σ_P and σ_I are their respective standard deviations.
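Equation (6) amounts to matching the first two moments of the PAN image to those of the intensity image. A minimal NumPy sketch, with illustrative names that are assumptions rather than the authors' code:

```python
import numpy as np

def histogram_match(pan, intensity):
    """Scale the PAN image so that its mean and standard deviation
    equal those of the intensity image (Equation (6))."""
    mu_p, sigma_p = pan.mean(), pan.std()
    mu_i, sigma_i = intensity.mean(), intensity.std()
    return (pan - mu_p) * (sigma_i / sigma_p) + mu_i
```

After this transformation, the matched image P_1 has exactly the mean and standard deviation of the intensity image, which makes it directly comparable to I in the subsequent estimation.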

The new intensity image Ĩ, which will be estimated using an optimization algorithm, can be written as

\[
\tilde{I} = w_1 \cdot I + w_2 \cdot P_1
\tag{7}
\]

where w_1 and w_2 are coefficients to be determined.

This formulation is similar to the single spatial-detail (SSD) model given by Garzelli et al. for image fusion [

\[
\mathrm{HRMS}_i = \mathrm{LRMS}_i + \gamma_i \cdot \mathrm{PAN}
\tag{8}
\]

where γ_i is the parameter to be estimated. Our model differs in two respects. Firstly, it describes the relationship among the new intensity image, the original intensity image and the histogram-matched PAN image. Secondly, in the SSD model, part of the spatial information in the PAN image is added to the low-resolution MS images to obtain the high-resolution MS images, which is necessary to enhance their spatial structure; in our model, the new intensity image Ĩ is estimated as a linear combination of the intensity image I and the histogram-matched PAN image. This is preferable to adding the PAN image to I, or replacing I entirely, because in those cases the spatial information in the intensity image would be lost.

To estimate the parameters w_1 and w_2, we employ the least-squares criterion, i.e., we minimize the following objective function:

\[
\min_{w_1, w_2} \; \left\| \tilde{I} - w_1 \cdot I - w_2 \cdot P_1 \right\|_2^2
\tag{9}
\]

where ‖·‖₂² denotes the square of the 2-norm of a vector.

The least-squares solution can be derived by setting the partial derivatives to zero, and can be expressed as

\[
\begin{bmatrix} w_1 \\ w_2 \end{bmatrix}
=
\begin{bmatrix}
I^T I & I^T P_1 \\
P_1^T I & P_1^T P_1
\end{bmatrix}^{-1}
\begin{bmatrix}
I^T \tilde{I} \\
P_1^T \tilde{I}
\end{bmatrix}
\tag{10}
\]

where M^T denotes the transpose of a matrix M, and M^{-1} denotes its inverse.
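For flattened image vectors, Equation (10) is the 2×2 normal-equations solution of the least-squares problem in Equation (9). A sketch under that reading; the function and variable names are illustrative assumptions:

```python
import numpy as np

def estimate_weights(i_img, p1, i_tilde):
    """Solve the least-squares problem of Equation (9) via the
    normal equations of Equation (10)."""
    A = np.column_stack([i_img.ravel(), p1.ravel()])  # columns [I, P1]
    gram = A.T @ A                                    # 2x2 Gram matrix
    rhs = A.T @ i_tilde.ravel()
    w1, w2 = np.linalg.solve(gram, rhs)
    return w1, w2
```

When the target is an exact linear combination of I and P_1, the solver recovers the combination coefficients exactly (up to floating-point error).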

To give a better fusion result, the optimization can be carried out in a sliding window of size 3 × 3, i.e., the parameters are estimated in each non-overlapping window. The proposed algorithm consists of the following three steps:

Step 1: Let Ĩ₀ = I + P_1 be the initial estimate of the new intensity image; set the maximum number of iterations and the tolerance α;

Step 2: Calculate the parameters w_1 and w_2 using Equation (10), and estimate the intensity image Ĩ₁ using Equation (7);

Step 3: If ‖Ĩ₁ − Ĩ₀‖₂ < α, output Ĩ₁ as the estimated intensity image; otherwise, set Ĩ₀ = Ĩ₁ and go back to Step 2.
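The three steps above can be sketched as follows, assuming the image dimensions are divisible by the window size. All names and default parameters are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def hybrid_intensity(i_img, p1, alpha=1e-4, max_iter=50, win=3):
    """Estimate the new intensity image by re-fitting (w1, w2) in each
    non-overlapping win x win window (Equations (7) and (10))."""
    i_cur = i_img + p1                                # Step 1: initialization
    h, w = i_img.shape
    for _ in range(max_iter):
        i_next = np.empty_like(i_cur)
        for r in range(0, h, win):                    # Step 2: window weights
            for c in range(0, w, win):
                ib = i_img[r:r+win, c:c+win].ravel()
                pb = p1[r:r+win, c:c+win].ravel()
                a = np.column_stack([ib, pb])
                wts, *_ = np.linalg.lstsq(a, i_cur[r:r+win, c:c+win].ravel(),
                                          rcond=None)
                i_next[r:r+win, c:c+win] = (wts[0] * i_img[r:r+win, c:c+win]
                                            + wts[1] * p1[r:r+win, c:c+win])
        if np.linalg.norm(i_next - i_cur) < alpha:    # Step 3: convergence test
            return i_next
        i_cur = i_next                                # otherwise iterate
    return i_cur
```

`np.linalg.lstsq` is used instead of an explicit 2×2 inverse so that windows where I and P_1 are nearly collinear remain numerically stable.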

The QuickBird images were downloaded from http://www.digitalglobe.com. The DigitalGlobe company provides commercial QuickBird satellite imagery, which contains one 0.6 m spatial-resolution panchromatic band (450 - 900 nm) and four 2.4 m MS bands: blue (450 - 520 nm), green (520 - 600 nm), red (630 - 690 nm) and near-infrared (760 - 900 nm). A subset of size 387 × 390 was cut from the original QuickBird images and used as the experimental images. The MS images were resampled to the same pixel size as the PAN image. The experimental images are shown in

The second set of remotely sensed images used in this paper is Landsat ETM+ imagery, downloaded from https://www.usgs.gov/. The panchromatic band of the ETM+ sensor has a spatial resolution of 15 m, while the multispectral bands have a spatial resolution of 30 m. It is therefore necessary to merge the abundant spectral information of the multispectral images with the panchromatic image to obtain high-resolution multispectral images. The images are shown in

To give an objective assessment, correlation coefficients are used to measure the spectral distortion between the fused MS images and the up-sampled MS images, since the original high-resolution MS images are unavailable. The correlation coefficient is defined as

\[
cc_{f,g} = \frac{\sum_{i=1}^{M} \sum_{j=1}^{N} \left( f(i,j) - \mu_f \right)\left( g(i,j) - \mu_g \right)}{\sqrt{\left( \sum_{i=1}^{M} \sum_{j=1}^{N} \left( f(i,j) - \mu_f \right)^2 \right) \left( \sum_{i=1}^{M} \sum_{j=1}^{N} \left( g(i,j) - \mu_g \right)^2 \right)}}
\tag{11}
\]

The correlation coefficient measures the similarity of the same spectral band between the fused image and the original image; its value should be as close to 1 as possible.
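Equation (11) is the Pearson correlation between two bands, and a direct NumPy sketch is short; the function and argument names are illustrative assumptions.

```python
import numpy as np

def correlation_coefficient(f, g):
    """Correlation coefficient between two bands (Equation (11))."""
    fd = f - f.mean()
    gd = g - g.mean()
    return (fd * gd).sum() / np.sqrt((fd ** 2).sum() * (gd ** 2).sum())
```

The value is invariant to affine rescaling of either band, so it measures spectral fidelity rather than absolute radiance agreement.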

Another index is ERGAS (Erreur Relative Globale Adimensionnelle de Synthese) [

\[
\mathrm{ERGAS} = 100\,\frac{h}{l} \sqrt{\frac{1}{N} \sum_{i=1}^{N} \frac{\mathrm{RMSE}^2(n_i)}{\tilde{n}_i^{\,2}}}
\tag{12}
\]

where h is the spatial resolution of the PAN image, l is the spatial resolution of the MS images, ñ_i is the mean radiance of the ith spectral band, and RMSE is the root mean square error calculated as

\[
\mathrm{RMSE}(n_i) = \sqrt{\frac{1}{NP} \sum_{j=1}^{NP} \left( O_j - F_j \right)^2}
\tag{13}
\]

where NP is the total number of pixels in the original and fused images, and O_j and F_j are the radiance values of pixel j in the ith band of the original image and the fused image, respectively. ERGAS assesses the spectral quality of the fused image: the lower the ERGAS value, the higher the spectral quality of the merged image [
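Equations (12) and (13) can be combined in a short helper. The band-list interface, argument names and ordering below are illustrative assumptions, not a definitive implementation.

```python
import numpy as np

def ergas(originals, fused, h, l):
    """ERGAS index (Equation (12)) with per-band RMSE (Equation (13)).
    `originals` and `fused` are sequences of matching 2-D bands;
    h and l are the PAN and MS spatial resolutions."""
    terms = []
    for o, f in zip(originals, fused):
        rmse = np.sqrt(np.mean((o - f) ** 2))   # Equation (13)
        terms.append(rmse ** 2 / o.mean() ** 2)
    return 100.0 * (h / l) * np.sqrt(np.mean(terms))
```

A perfect fusion gives ERGAS = 0; larger values indicate more spectral distortion.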

The outputs of applying different fusion methods to QuickBird images are shown in

It can also be seen that the results of Brovey fusion, PCA fusion and fast IHS fusion are severely affected by spectral distortion, which is confirmed by the correlation coefficients between the fused MS images and the up-sampled MS images in

The fast IHS results display higher spectral distortion than the results derived from the hybrid IHS. The reason for the spectral distortion of fast IHS has been investigated by Tu [

In addition to the visual inspection, the performance of these two methods is further quantitatively analyzed using the assessment indexes. Firstly, the correlation coefficients verify that the results from the hybrid IHS are more similar to the original MS images than those from fast IHS. Little spectral distortion emerges in the results of the proposed method, which can also be seen by visual inspection. There is a major difference in RMSE and ERGAS between the results derived from the different methods: the hybrid IHS (HIHS) results have smaller RMSE and ERGAS than the fast IHS results, which demonstrates that the hybrid IHS results have higher quality. The correlation coefficient to the PAN image of the hybrid IHS is less than that of the fast IHS result, which indicates a shortage of spatial information in the results of the hybrid IHS compared with those of fast IHS.

| Method | Correlation Coefficients | | | RMSE | | | Cc to PAN | ERGAS |
|---|---|---|---|---|---|---|---|---|
| | R | G | B | R | G | B | | |
| Brovey | 0.7028 | 0.4187 | 0.2718 | 0.0800 | 0.1213 | 0.0774 | 0.9227 | 14.4306 |
| Wavelet | 0.8152 | 0.8661 | 0.6557 | 0.4315 | 0.2992 | 0.4673 | 0.8839 | 70.7303 |
| PCA | 0.4128 | 0.4806 | 0.4145 | 0.0576 | 0.0659 | 0.0702 | 0.9709 | 10.6170 |
| Fast IHS | 0.4995 | 0.6156 | 0.2395 | 0.1123 | 0.1123 | 0.1123 | 0.9889 | 18.5688 |
| Hybrid IHS | 0.9246 | 0.8352 | 0.7811 | 0.0233 | 0.0394 | 0.0258 | 0.7456 | 4.5545 |

In this subsection, the proposed hybrid IHS method, together with the other fusion methods, is used to fuse the MS images and the panchromatic image taken by the Landsat ETM+ sensor. The fused Landsat ETM+ images are shown in

| Method | Correlation Coefficients | | | RMSE | | | Cc to PAN | ERGAS |
|---|---|---|---|---|---|---|---|---|
| | R | G | B | R | G | B | | |
| Brovey | 0.9447 | 0.8373 | 0.6344 | 0.2081 | 0.1426 | 0.1827 | 0.9146 | 18.8694 |
| Wavelet | 0.9934 | 0.9797 | 0.9259 | 0.1335 | 0.1747 | 0.1339 | 0.8298 | 16.99 |
| PCA | 0.8086 | 0.6598 | 0.2865 | 0.0366 | 0.0216 | 0.0393 | 0.9915 | 3.4448 |
| Fast IHS | 0.9725 | 0.7522 | 0.6464 | 0.0574 | 0.0574 | 0.0574 | 0.8737 | 6.3127 |
| Hybrid IHS | 0.9835 | 0.9669 | 0.9187 | 0.0111 | 0.0081 | 0.0103 | 0.8597 | 1.0504 |

Spatial information in the fused images has been increased to some degree. However, significant spectral distortion emerges in the fused results of the Brovey, PCA and fast IHS fusion methods.

To give a thorough investigation of the proposed hybrid IHS fusion method, different indexes are used to assess the performance of these methods (

In this paper, we present a hybrid of IHS and minimum mean-square-error (MMSE) for fusing low-resolution multispectral (LRMS) and panchromatic images of the same scene. IHS is one of the most commonly used fusion algorithms for merging the spatial information of the PAN image with the spectral information of the LRMS images. However, the spectral distortion of the IHS method seriously deteriorates the quality of the fused images. The spectral distortion arises from adding the difference between the PAN image and the intensity image directly to the original RGB images. To avoid or mitigate the influence of pixels whose values in the difference image are much larger than those of ordinary pixels, the MMSE model is utilized to estimate the new intensity image from the PAN and intensity images.

QuickBird PAN and LRMS images are fused to evaluate the performance of the proposed algorithm, with fast IHS (FIHS) used as a reference for analyzing the results of the hybrid IHS (HIHS) method. The comparison confirms that the HIHS results preserve most of the spectral information with little spectral distortion, while the FIHS results show significant spectral distortion and hence worse fusion quality. Therefore, the proposed hybrid method outperforms the commonly used FIHS method by providing higher-quality fusion results.

Ding, H.Y. and Shi, W.Z. (2017) A Novel Hybrid Pan-Sharpen Method Using IHS Transform and Optimization. Advances in Remote Sensing, 6, 229-243. https://doi.org/10.4236/ars.2017.63017