Journal of Computer and Communications
Vol. 05, No. 12 (2017), Article ID: 79734, 14 pages
DOI: 10.4236/jcc.2017.512004

Remote Sensing Image Fusion Using Bidimensional Empirical Mode Decomposition and the Least Squares Theory

Dengshan Huang1,2, Peng Yang1, Jun Li1, Changhui Ma1

1College of Civil Engineering and Mechanics, Xiangtan University, Xiangtan, China

2College of Mathematics and Computational Science, Xiangtan University, Xiangtan, China

Copyright © 2017 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution-NonCommercial International License (CC BY-NC 4.0).

http://creativecommons.org/licenses/by-nc/4.0/

Received: August 31, 2017; Accepted: October 17, 2017; Published: October 20, 2017

ABSTRACT

The data acquired by most optical Earth observation satellites, such as IKONOS, QuickBird-2 and GF-1, consist of a panchromatic image with high spatial resolution and multispectral images with low spatial resolution, and many image fusion techniques have been developed to produce high-resolution multispectral images from them. Since the panchromatic image and the multispectral images contain the same spatial information with different accuracy, the least squares theory can be used to estimate the optimal spatial information; compared with previous spatial detail injection modes, this mode is more accurate and robust. In this paper, an image fusion method using Bidimensional Empirical Mode Decomposition (BEMD) and the least squares theory is proposed to merge multispectral images and a panchromatic image. The multispectral images are first transformed from RGB space into IHS space, then the I component and the panchromatic image are decomposed by BEMD, the least squares theory is used to estimate and inject the optimal spatial information, and the fusion is completed through the inverse BEMD and the inverse intensity-hue-saturation transform. Two data sets, GF-1 images and QuickBird-2 images, are used to evaluate the proposed fusion method. The fused images were evaluated visually and statistically, and the evaluation results show that the method proposed in this paper achieves the best performance compared with the conventional methods.

Keywords:

Remote Sensing, Image Fusion, Bidimensional Empirical Mode Decomposition, Least Squares Theory

1. Introduction

In a remote sensing satellite system, data storage capacity and data transmission speed are limited, so the spectral resolution of the image data is usually inversely proportional to the spatial resolution: the spectral resolution of the multispectral (MS) images is high while their spatial resolution is low, whereas the spatial resolution of the panchromatic (Pan) image is high while its spectral resolution is low, as in Landsat, IKONOS, and QuickBird-2. Many applications such as map updating, land-use classification, and natural disaster and environment monitoring require remote sensing images with both high spectral and high spatial resolution. High spatial resolution allows accurate geometric analysis, while high spectral resolution is essential for the objective interpretation of objects [1] [2] [3]. It is therefore necessary to merge the MS images, with their high spectral resolution, and the Pan image, with its high spatial resolution, to produce an image with both high spectral and spatial resolution [4] [5] [6].

Many remote sensing image fusion methods have been developed in recent years [7]. In general, these methods can be grouped into two classes. The first class is based on spatial domain transforms and component substitution, such as the intensity-hue-saturation (IHS) transform, principal component analysis (PCA) and the Gram-Schmidt transform. In this kind of method, the multispectral images are transformed into another space, and the panchromatic image is then used to replace one of the transformed components to complete the fusion. These methods are computationally cheap and fast, but the fused image shows obvious spectral distortion. The second class of fusion methods is based on multi-resolution analysis, including the fast Fourier transform (FFT) [8], the wavelet transform (WT) [9] [10], the contourlet transform and so on. On the whole, this kind of method preserves spectral information and injects spatial detail better than the first kind, but the computation is relatively complicated and time-consuming. There are still a few gaps between the fused image and the ideal standard image [11].

Empirical Mode Decomposition (EMD) [12] is a relatively new signal processing method compared with the FFT and WT. While the FFT and WT are suited to linear, stationary signals, EMD can adaptively decompose linear or nonlinear signals into a set of functions called Intrinsic Mode Functions (IMFs) plus a residue. It has been applied to many tasks in image processing, such as image compression and image analysis. Bidimensional Empirical Mode Decomposition (BEMD) [13] is the generalization of EMD to two-dimensional space and has been applied to merging MS images and a Pan image [14] [15] [16]. Liu et al. [14] used BEMD to decompose the MS images and the Pan image into IMFs and residues respectively, substituted the MS images' IMFs with the Pan image's IMFs, and finally added the Pan image's IMFs to the residues of the MS images to construct the fused images. In order to increase the computing speed, Dong et al. [16] applied the IHS transform to calculate the intensity component of the MS images, decomposed the intensity component and the Pan image via BEMD, reconstructed a new intensity component by substituting the intensity component's IMFs with the IMFs of the Pan image, and finally completed the image fusion via the inverse IHS transform. Although the above methods achieve good results, their spatial details are injected by substituting the intensity component or the high-frequency information, and the spatial information of the MS images is discarded at the same time. This results in a loss of information and reduces the reliability of the fusion results. In this paper, we propose a fusion method to overcome these shortcomings. After the MS images are IHS-transformed, the I component and the Pan image are decomposed into IMFs and residues respectively. Based on the spatial resolution ratio of the Pan and MS images, we use the least squares principle to estimate the optimal spatial information of the fused images, and then complete the fusion through the inverse BEMD and the inverse IHS transform. The fused image thus has the same spatial resolution as the panchromatic image while maintaining the spectral characteristics of the multispectral images, without information loss during the fusion processing.

2. Related Work

2.1. IHS Transform

The image fusion technique based on the IHS transform is a representative method, which has been widely used to merge MS images and a Pan image. Due to its fast implementation and high efficiency, it has been integrated into some remote sensing image processing software, such as ENVI and ERDAS. It converts a color image from RGB space to IHS space; the transformation models include the sphere, cylinder, triangle and hexagonal cone transformations, among others [17] [18]. The formula adopted in this paper is as follows,

$$\begin{pmatrix} I \\ V_1 \\ V_2 \end{pmatrix} = \begin{pmatrix} \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\ -\frac{\sqrt{2}}{6} & -\frac{\sqrt{2}}{6} & \frac{2\sqrt{2}}{6} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0 \end{pmatrix} \begin{pmatrix} R \\ G \\ B \end{pmatrix} \quad (1)$$

$$H = \tan^{-1}\left(\frac{V_2}{V_1}\right), \qquad S = \sqrt{V_1^2 + V_2^2} \quad (2)$$

Inverse transformation formula is,

$$\begin{pmatrix} R \\ G \\ B \end{pmatrix} = \begin{pmatrix} 1 & -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ 1 & -\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ 1 & \sqrt{2} & 0 \end{pmatrix} \begin{pmatrix} I \\ V_1 \\ V_2 \end{pmatrix} \quad (3)$$
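To make the transform concrete, the following is a minimal sketch of Formulas (1)-(3) in Python/NumPy. The function names and the flattened-pixel matrix products are choices of this sketch, not part of the original method description; arctan2 is used in place of tan⁻¹ to avoid division by zero.

```python
import numpy as np

# Forward IHS matrix from Eq. (1) and its inverse from Eq. (3).
FWD = np.array([[1/3, 1/3, 1/3],
                [-np.sqrt(2)/6, -np.sqrt(2)/6, 2*np.sqrt(2)/6],
                [1/np.sqrt(2), -1/np.sqrt(2), 0.0]])
INV = np.array([[1, -1/np.sqrt(2),  1/np.sqrt(2)],
                [1, -1/np.sqrt(2), -1/np.sqrt(2)],
                [1,  np.sqrt(2),    0.0]])

def rgb_to_ihs(rgb):
    """rgb: (H, W, 3) array -> I, V1, V2, H, S arrays via Eq. (1)-(2)."""
    ivv = rgb.reshape(-1, 3) @ FWD.T
    i, v1, v2 = (c.reshape(rgb.shape[:2]) for c in ivv.T)
    h = np.arctan2(v2, v1)   # Eq. (2); arctan2 handles V1 = 0
    s = np.hypot(v1, v2)
    return i, v1, v2, h, s

def ihs_to_rgb(i, v1, v2):
    """Inverse transform, Eq. (3)."""
    ivv = np.stack([i.ravel(), v1.ravel(), v2.ravel()], axis=1)
    return (ivv @ INV.T).reshape(i.shape + (3,))
```

One can verify that `INV @ FWD` is the 3 × 3 identity, i.e. that Eq. (3) exactly inverts Eq. (1).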

2.2. Bidimensional Empirical Mode Decomposition

EMD is a relatively new signal analysis method [12]. Unlike the FFT and WT, which are based on fixed basis functions, it decomposes a signal based on the time-scale characteristics of the data itself, without any predefined basis function. EMD decomposes a signal into a series of Intrinsic Mode Functions (IMFs) and a residue; the IMFs reflect the local characteristics of the original signal at different time scales, similar to the high-frequency components in the wavelet transform, while the residue is similar to the low-frequency component. In theory, the EMD method can be applied to the decomposition of any type of signal, including linear, nonlinear, stationary and non-stationary signals.

Bidimensional Empirical Mode Decomposition (BEMD) is the extension of one-dimensional EMD to two-dimensional space [12] [13]. In general, an image is represented as $f(x,y)$ $(x = 1, 2, \ldots, M;\; y = 1, 2, \ldots, N)$, and the details of BEMD are as follows (a code sketch is given after the steps):

1) Initialization. Let $r_0 = f(x,y)$, where $r_l$ denotes the lth residue.

2) Calculate the IMFs. Let $h_{l,k}$ denote the result of the kth sifting iteration for the lth IMF. The details of the calculation are as follows:

a) Let $h_{l,0} = r_{l-1}$, the current residue.

b) Find all local maxima and minima of $h_{l,k}$, interpolate surfaces through the maxima and through the minima, and obtain a maxima envelope surface $u_{\max}(x,y)$ and a minima envelope surface $u_{\min}(x,y)$.

c) Calculate the average envelope surface:

$$mean_{l,k}(x,y) = \frac{u_{\max}(x,y) + u_{\min}(x,y)}{2} \quad (4)$$

d) Calculate $h_{l,k+1}$:

$$h_{l,k+1}(x,y) = h_{l,k}(x,y) - mean_{l,k}(x,y) \quad (5)$$

e) Calculate the standard deviation (SD) used as the stopping criterion. If SD < ε, where ε is a predefined threshold (often 0.2 - 0.3), then $h_{l,k+1}$ can be taken as the lth BIMF; otherwise, repeat steps (b) to (e) until the stopping condition is satisfied.

$$SD = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \frac{\left| h_{l,k+1}(x,y) - h_{l,k}(x,y) \right|^2}{h_{l,k}^2(x,y)} \quad (6)$$

3) Calculate the residue:

$$r_i(x,y) = r_{i-1}(x,y) - BIMF_i(x,y) \quad (7)$$

If $r_i(x,y)$ contains more than K extreme points, return to step (2) to continue the decomposition until the number of extreme points of $r_i(x,y)$ is less than K.

4) Finally, $f(x,y)$ can be expressed as follows:

$$f(x,y) = \sum_{i=1}^{n} BIMF_i(x,y) + r_n(x,y) \quad (8)$$
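The sifting procedure above can be sketched as follows, assuming NumPy and SciPy are available. The 3 × 3 extrema-detection window, the cubic envelope interpolation, the small epsilon guarding the division in Eq. (6), and extracting a fixed number of BIMFs instead of testing the extreme-point count K are all assumptions of this sketch, not prescriptions of the algorithm.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter
from scipy.interpolate import griddata

def envelope(img, find_max=True, win=3):
    """Interpolate an envelope surface through the local extrema of img."""
    filt = maximum_filter if find_max else minimum_filter
    ys, xs = np.nonzero(img == filt(img, size=win))        # extrema locations
    grid_y, grid_x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return griddata((ys, xs), img[ys, xs], (grid_y, grid_x),
                    method='cubic', fill_value=img.mean())

def bemd(img, n_imfs=3, sd_thresh=0.25, max_sift=10):
    """Decompose img into BIMFs and a residue, following Eq. (4)-(8)."""
    residue = img.astype(float)
    imfs = []
    for _ in range(n_imfs):
        h = residue.copy()
        for _ in range(max_sift):
            mean = 0.5 * (envelope(h, True) + envelope(h, False))  # Eq. (4)
            h_new = h - mean                                       # Eq. (5)
            sd = np.sum((h_new - h) ** 2 / (h ** 2 + 1e-12))       # Eq. (6)
            h = h_new
            if sd < sd_thresh:
                break
        imfs.append(h)
        residue = residue - h                                      # Eq. (7)
    return imfs, residue   # img == sum(imfs) + residue, Eq. (8)
```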

2.3. Image Fusion Based on BEMD and the Least Squares Theory

Previous studies [10] [19] have shown that jointly using the IHS transform and multi-scale analysis to fuse MS images and a Pan image performs better than using either alone, and reduces the amount of computation. This model is adopted in this paper, but the injection of spatial details is performed with the least squares method. In general, the spatial information of an image is considered to be contained in its high-frequency components, while the spectral information is contained in its low-frequency components. Since the MS images and the Pan image have the same imaging mechanism, the spatial information contained in the images is the same even though their spectral information differs. The spatial information in the MS and Pan images therefore forms a set of observations of unequal precision, and an optimal estimate of the spatial information can be calculated via the least squares method. The main idea is as follows:

1) Assume that the quantity to be estimated does not vary with time and that multiple observations Y of it are made:

$$Y = CX + V \quad (9)$$

where Y is the m × 1 observation vector, C is the m × n observation matrix, X is the true value to be estimated, and V is the observation error, whose mean is zero.

2) When the number of observation equations exceeds the number of unknown variables, the optimal estimate of the true value X can be expressed as follows:

$$\begin{cases} \bar{X}_{LSW} = \left( C^T R^{-1} C \right)^{-1} C^T R^{-1} Y \\ R = \mathrm{Var}[V] \end{cases} \quad (10)$$

where $\mathrm{Var}[V]$ is the covariance matrix of the observation errors.

3) Assume that three MS images and a Pan image whose spatial resolution ratio is n : 1 are used to produce the fused images, that the observation error of the panchromatic spatial information is r, and that the corresponding multispectral observation error is nr. The parameters of Formula (9) are then as follows,

$$Y = CX + V, \qquad Y = \begin{bmatrix} Y_1 \\ Y_2 \\ Y_3 \\ Y_4 \end{bmatrix}, \qquad C = \begin{bmatrix} 1 & 1 & 1 & 1 \end{bmatrix}^T$$

$$R = \begin{bmatrix} r^2 & 0 & 0 & 0 \\ 0 & (nr)^2 & 0 & 0 \\ 0 & 0 & (nr)^2 & 0 \\ 0 & 0 & 0 & (nr)^2 \end{bmatrix}$$

Then the optimal estimate of the true value X is,

$$\bar{X}_{LSW} = \left( C^T R^{-1} C \right)^{-1} C^T R^{-1} Y = \frac{n^2}{n^2+3} Y_1 + \frac{1}{n^2+3} Y_2 + \frac{1}{n^2+3} Y_3 + \frac{1}{n^2+3} Y_4 \quad (11)$$
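As a concrete check of Formula (11), consider the QuickBird-2 data used in Section 3, whose Pan/MS resolution ratio is 4 : 1. With n = 4, the estimate weights the panchromatic observation by 16/19 and each multispectral observation by 1/19:

$$\bar{X}_{LSW} = \frac{16}{19} Y_1 + \frac{1}{19} \left( Y_2 + Y_3 + Y_4 \right)$$

so the panchromatic spatial information dominates, but the multispectral spatial information is still retained rather than discarded.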

The detailed fusion steps are as follows, and Figure 1 shows the processing flow of the proposed BEMD + IHS fusion method; a code sketch of steps 2) to 7) is given after the list.

Figure 1. The image fusion scheme based on the proposed BEMD method.

1) Geometric registration of the images to be fused.

2) Transform the MS images into intensity, hue and saturation components by the IHS transform.

3) Apply histogram matching between the intensity component and the Pan image to produce a new Pan image.

4) Decompose the intensity component and the new Pan image using the BEMD method, and obtain the corresponding IMFs and residue components.

5) Substitute the IMFs into Formula (11) to calculate the optimal IMFs, which constitute the optimal spatial information estimate.

6) Use Formula (8) to reconstruct the new I component.

7) Perform the inverse IHS transform to complete the fusion.
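A minimal sketch of steps 2) to 7) follows, reusing the rgb_to_ihs/ihs_to_rgb helpers from Section 2.1 and the bemd routine from Section 2.2. The mean/standard-deviation form of histogram matching in step 3), and the reading of Formula (11) in which the I-component IMF stands in for the three equal MS observations (Y2 = Y3 = Y4), are assumptions of this sketch rather than details fixed in the paper.

```python
import numpy as np

def fuse_bemd_ls(ms_rgb, pan, n=4, bemd=bemd):
    """Steps 2)-7): IHS, histogram matching, BEMD, LS injection, inverse IHS.

    ms_rgb : (H, W, 3) MS image resampled to the Pan grid (after step 1).
    pan    : (H, W) panchromatic image.
    n      : Pan/MS spatial resolution ratio.
    bemd   : BEMD routine returning (list of BIMFs, residue).
    """
    i, v1, v2, _, _ = rgb_to_ihs(ms_rgb)                   # step 2, Eq. (1)

    # Step 3: match the Pan histogram to the I component (mean/std variant).
    pan_m = (pan - pan.mean()) / pan.std() * i.std() + i.mean()

    # Step 4: decompose both images into BIMFs and residues.
    imfs_i, res_i = bemd(i)
    imfs_p, _ = bemd(pan_m)

    # Step 5: Eq. (11); the I IMF stands in for Y2 = Y3 = Y4 (assumption).
    w_pan, w_ms = n**2 / (n**2 + 3), 3 / (n**2 + 3)
    imfs_new = [w_pan * p + w_ms * q for p, q in zip(imfs_p, imfs_i)]

    # Step 6: reconstruct the new I component via Eq. (8).
    i_new = sum(imfs_new) + res_i

    # Step 7: inverse IHS transform, Eq. (3).
    return ihs_to_rgb(i_new, v1, v2)
```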

3. Experiments and Analysis

3.1. Experiments

In this paper, we choose two sets of data, coming from different sensors and different regions, to verify the effectiveness of the fusion method, and the fusion result of the proposed method is compared with the fusion results of the DWT- and BEMD-based fusion methods [14] [16] [19]. The first data set is a QuickBird-2 image of a region of Zhangjiajie city, Hunan province, China. The size of the Pan image is 1024 × 1024 with a spatial resolution of 0.61 m, and the size of the MS images is 512 × 512 with a spatial resolution of 2.44 m. The second data set is a GF-1 satellite image of a region of Xiangtan city, Hunan province, China. The size of the Pan image is 1024 × 1024 with a spatial resolution of 2 m, and the size of the MS images is 512 × 512 with a spatial resolution of 8 m. In order to evaluate the fusion results against real images, the images are degraded by smoothing filtering and down-sampling, where the size of the filter equals the image resolution ratio. The resolution ratio of the QuickBird-2 multispectral and panchromatic images is 4 : 1, and the smoothing filter is given in Formula (12) below. The fused images should then have the same spectral and spatial features as the original MS images, which can serve as references to facilitate the evaluation of the fusion effects of the different methods. The QuickBird-2 and GF-1 Pan images are degraded to 2.4 m and 8 m spatial resolution, and the corresponding MS images are degraded to 9.8 m and 32 m spatial resolution.

$$\frac{1}{16} \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \end{bmatrix} \quad (12)$$
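For illustration, the degradation protocol can be sketched as a box filter followed by decimation; with SciPy, uniform_filter with size = 4 applies exactly the 4 × 4 mean filter of Formula (12). The function name and the float cast are choices of this sketch.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def degrade(img, ratio=4):
    """Smooth with the ratio x ratio mean filter of Eq. (12), then down-sample."""
    smooth = uniform_filter(img.astype(float), size=ratio)
    return smooth[::ratio, ::ratio]
```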

Figure 2 and Figure 3 show the experimental data and the fusion results of the various methods. Among them, Figure 2(a) and Figure 3(a) are the degraded multispectral images; Figure 2(b) and Figure 3(b) are the original multispectral images, which also serve as the standard images for evaluating the fusion effect; and Figure 2(c), Figure 2(d), Figure 3(c) and Figure 3(d) are the fusion results based on the wavelet transform using different spatial detail injection models [19]

Figure 2. The fusion result of GF-1 multispectral and panchromatic images. (a) Degraded multispectral images; (b) Original multispectral images; (c) DWT + IHS method; (d) DWT + IHS + LS method; (e) BEMD + IHS method; (f) The method proposed in this paper.

Figure 3. The fusion result of QuickBird-2 multispectral and panchromatic images. (a) Degraded multispectral images; (b) Original multispectral images; (c) DWT + IHS method; (d) DWT + IHS + LS method; (e) BEMD + IHS method; (f) The method proposed in this paper.

[20]. Figure 2(e) and Figure 3(e) are the results of image fusion based directly on the BEMD + IHS transform [14]. In that procedure, the IHS transform is first applied to the multispectral images, then the I component and the panchromatic image are decomposed by BEMD, the high-frequency components of the Pan image replace the high-frequency components of the I component, and finally the new I component is reconstructed and the inverse IHS transform is applied to complete the fusion. Figure 2(f) and Figure 3(f) are the fused images produced by the method proposed in this paper.

As shown in Figure 2, the spatial resolution of the fused images in Figures 2(c)-(f) is greatly improved over the degraded multispectral image in Figure 2(a). The clarity of spatial entities is even better than in the original image in Figure 2(b). The building boundaries of region A are not obvious in Figure 2(b), but they are very clear in the fused images. Region B can only be identified as green vegetation in Figure 2(b), where its specific type cannot be distinguished, but its rice-field character is easy to determine in the fused images. It can also be seen that the boundaries and textures of entities in Figure 2(c) and Figure 2(d) are much sharper than those in Figure 2(e) and Figure 2(f), while the clarity of spatial entities in Figure 2(e) and Figure 2(f) is much closer to the standard reference image in Figure 2(b). This indicates that the wavelet-based methods inject spatial details into the fused images excessively. In terms of color, which denotes the spectral characteristics of the images, all the fused images in Figure 2 are quite similar to the reference image Figure 2(b); under careful observation, however, the colors of the blue and red buildings in the fused images are lighter than those in the reference image Figure 2(b), which indicates a loss of spectral information. This phenomenon arises because the original image is up-sampled during the fusion processing: only spatial detail information is injected and no spectral information is supplemented, so compared with the reference images the fused images lack some spectral information. The same reason explains the lack of spectral information in a fused image compared with an image acquired by a sensor with the same parameters [11]. Producing a fused image with exactly the same spectral information as the reference image is therefore a task that cannot be fulfilled perfectly. Among Figures 2(c)-(f), it is obvious that the colors of region A (building), region B (vegetation) and region C (water) in Figure 2(e) and Figure 2(f) are closer to the reference image, which shows that the BEMD-based fusion methods are superior to the wavelet-based methods in preserving spectral properties. There is, however, no obvious visual difference between Figure 2(c) and Figure 2(d), or between Figure 2(e) and Figure 2(f), which shows that the influence of the spatial detail injection mode, substitution or least squares, is not visually obvious in the fused image.

Because the spatial resolution of the QuickBird-2 image in Figure 3 is higher than that of the GF-1 image in Figure 2, the changes of spatial detail and color in the fused images are more obvious. The water in region A is an inconspicuous gray area in the reference image Figure 3(b), but it becomes an obviously black area in Figure 3(c) and Figure 3(d); the color and spatial detail in Figure 3(e) and Figure 3(f) lie between those of Figure 3(b) and Figure 3(c). There is a blue building in region B of the reference image Figure 3(b); it becomes gray in Figure 3(c) and Figure 3(d), while in Figure 3(e) and Figure 3(f) it appears baby blue. Region C is a blurred, dark green vegetation area in the reference image Figure 3(b), indistinguishable from the surrounding vegetation, but it becomes a clear, pale green vegetation area easily distinguished from the surrounding vegetation in Figure 3(c) and Figure 3(d); again, the color and spatial detail in Figure 3(e) and Figure 3(f) lie between those of Figure 3(b) and Figure 3(c).

3.2. Quantitative Assessment

For the objective evaluation, we select the correlation coefficient (CC) and the distortion degree (DD) to evaluate the spectral properties of the fused image relative to the reference image, and the high-frequency correlation coefficient (HFCC) [21] and the Structural Similarity Index Measurement (SSIM) [22] to evaluate the similarity between the spatial structure information of the fused image and that of the reference image.

The correlation coefficient (CC) reflects the similarity of the spectral properties of two images. The closer the correlation coefficient is to 1, the closer the spectral properties of the two images are to each other. The correlation coefficient can be calculated as follows:

$$CC = \frac{\sum_{i=1}^{M} \sum_{j=1}^{N} \left[ X(x_i, y_j) - \bar{X} \right] \left[ Y(x_i, y_j) - \bar{Y} \right]}{\sqrt{\sum_{i=1}^{M} \sum_{j=1}^{N} \left[ X(x_i, y_j) - \bar{X} \right]^2 \sum_{i=1}^{M} \sum_{j=1}^{N} \left[ Y(x_i, y_j) - \bar{Y} \right]^2}} \quad (13)$$

where $\bar{X}$ and $\bar{Y}$ are the means of image X and image Y.

The distortion degree (DD) indicates the deviation of the fused image relative to the reference image. The smaller the distortion degree, the closer the spectral information of the two images. The distortion degree can be calculated as follows:

$$DD = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left| X(x_i, y_j) - Y(x_i, y_j) \right| \quad (14)$$

The high-frequency correlation coefficient (HFCC) represents the similarity between the high-frequency information of the fused image and that of the reference image. The main idea is based on the fact that most of the spatial information in an image is concentrated in its high-frequency content. The closer the high-frequency correlation coefficient is to 1, the closer the spatial structures of the images are. The filter we use to extract the high-frequency content of an image is the following Laplace mask:

$$\begin{bmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{bmatrix} \quad (15)$$

The Structural Similarity Index Measurement (SSIM) is a comprehensive index which takes into account the gray-level correlation, the gray-level deviation and the contrast of the images. The closer its value is to 1, the closer the two images are. The SSIM can be calculated as follows:

$$SSIM = \frac{4 \sigma_{XY} \mu_X \mu_Y}{\left( \sigma_X^2 + \sigma_Y^2 \right) \left[ (\mu_X)^2 + (\mu_Y)^2 \right]} \quad (16)$$

where $\sigma_{XY}$ is the covariance between image X and image Y, $\mu_X$ and $\mu_Y$ are their means, and $\sigma_X^2$ and $\sigma_Y^2$ are their variances.
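The four indexes can be computed directly from Formulas (13)-(16). Below is a minimal sketch assuming NumPy and SciPy; the function names are our own, and the global (whole-image) form of Eq. (16) is used rather than a windowed SSIM.

```python
import numpy as np
from scipy.ndimage import convolve

def cc(x, y):
    """Correlation coefficient, Eq. (13)."""
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm**2).sum() * (ym**2).sum())

def dd(x, y):
    """Distortion degree, Eq. (14)."""
    return np.abs(x - y).mean()

def hfcc(x, y):
    """High-frequency CC: Eq. (13) applied to Laplacian-filtered images, Eq. (15)."""
    lap = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], float)
    return cc(convolve(x, lap), convolve(y, lap))

def ssim_q(x, y):
    """Global Q index of Eq. (16), following Wang and Bovik [22]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))
```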

Table 1 and Table 2 list the objective evaluation indexes for the fusion of the two sets of MS images and Pan images.

From Table 1, it can be seen that the correlation coefficients between the BEMD-based fusion results and the reference images are greater than those of the fusion results based on the discrete wavelet transform, and their distortions are smaller, which shows that the fusion methods based on BEMD decomposition are superior to those based on the wavelet transform in spectral preservation. In addition, the spatial detail injection model of the DWT + IHS + LS method and of the method proposed in this paper is the least squares estimate; compared with the DWT + IHS and BEMD + IHS methods, which directly use the panchromatic image's high-frequency information to replace the I

Table 1. Objective evaluation indexes of the fused GF-1 images.

Table 2. Objective evaluation indexes of the fused QuickBird-2 images.

component's high-frequency information, their fused images' correlation coefficients are larger and their distortions are smaller. This shows that, for spectral preservation, the least squares estimate is a more efficient model than direct replacement of the high-frequency information.

The high-frequency correlation coefficients of the fused images based on the method proposed in this paper are slightly higher than the corresponding indexes of the images produced by the other three methods, whose values are basically close to one another; for the wavelet-based methods in particular, injecting spatial details by the least squares method is not efficient at improving the high-frequency correlation coefficient. The ranking of the SSIM indexes is the same as the ranking of the correlation coefficients and distortions. Together, these indexes show that the fused image based on the method proposed in this paper is the closest to the reference image in both spectral and spatial information.

The distribution of the indexes in Table 2 is basically consistent with that in Table 1. The fused image based on the method proposed in this paper is the closest to the reference image on all four indexes. Combined with the visual comparative analysis, it can be concluded that the image fusion methods based on BEMD are more effective than those based on the wavelet transform in preserving spectral information and integrating spatial information. In terms of spatial detail injection, relative to the high-frequency substitution model, the image that integrates spatial details by the least squares method is closer to the reference image in spatial structure.

4. Conclusion

In order to make the fused image contain the same spectral and spatial information as an image obtained by a sensor with the same parameters, this paper proposes a fusion method based on BEMD + IHS to merge MS images and a Pan image. Considering that the spatial information in the MS and Pan images forms a set of observations of unequal precision, an optimal estimate of the spatial information is calculated via the least squares method. Two data sets, GF-1 images and QuickBird-2 images, are used to verify the fusion method. Visual analysis and objective evaluation show that the fusion results based on the BEMD method proposed in this paper are much closer to the reference image than the fusion results based on the wavelet transform, and that the fused images that integrate spatial detail with the least squares method are much closer to the reference image than the fused images that substitute the I component's high-frequency information with the panchromatic image's high-frequency information. In addition, the fused images that use the substitution model to inject spatial details exhibit over-injection of spatial information.

Acknowledgements

This research is supported by the Natural Science Foundation of Hunan Province (14JJ7039) and the Educational Commission of Hunan Province (14C1071), China.

Cite this paper

Huang, D.S., Yang, P., Li, J. and Ma, C.H. (2017) Remote Sensing Image Fusion Using Bidimensional Empirical Mode Decomposition and the Least Squares Theory. Journal of Computer and Communications, 5, 35-48. https://doi.org/10.4236/jcc.2017.512004

References

1. Hedhli, I., Moser, G., Zerubia, J. and Serpico, S.B. (2014) Fusion of Multitemporal and Multiresolution Remote Sensing Data and Application to Natural Disasters. International Geoscience and Remote Sensing Symposium, Québec, July 2014, 207-210.

2. Pereira, O.J.R., Melfi, A.J. and Montes, C.R. (2017) Image Fusion of Sentinel-2 and CBERS-4 Satellites for Mapping Soil Cover in the Wetlands of Pantanal. International Journal of Image & Data Fusion, 8, 1-25. https://doi.org/10.1080/19479832.2016.1261946

3. Padula, F., Macdonough, J., Bondy, D., et al. (2010) Automated Supervised Land Use Classification and Change Detection: An Image Fusion Based Approach. IEEE International Geoscience & Remote Sensing Symposium, Honolulu, HI, 25-30 July 2010, 335-338. https://doi.org/10.1109/IGARSS.2010.5651950

4. Ghassemian, H. (2016) A Review of Remote Sensing Image Fusion Methods. Information Fusion, 32, 75-89.

5. Wald, L. (1999) Some Terms of Reference in Data Fusion. IEEE Transactions on Geoscience and Remote Sensing, 37, 1190-1193. https://doi.org/10.1109/36.763269

6. Pohl, C. and van Genderen, J.L. (1998) Multisensor Image Fusion in Remote Sensing: Concepts, Methods and Applications. International Journal of Remote Sensing, 19, 823-854. https://doi.org/10.1080/014311698215748

7. Nikolakopoulos, K.G. (2008) Comparison of Nine Fusion Techniques for Very High Resolution Data. Photogrammetric Engineering & Remote Sensing, 74, 647-659. https://doi.org/10.14358/PERS.74.5.647

8. Ling, Y., Ehlers, M., Usery, E.L. and Madden, M. (2007) FFT-Enhanced IHS Transform Method for Fusing High-Resolution Satellite Images. ISPRS Journal of Photogrammetry and Remote Sensing, 61, 381-392. https://doi.org/10.1016/j.isprsjprs.2006.11.002

9. Yocky, D.A. (1996) Multiresolution Wavelet Decomposition Image Merger of Landsat Thematic Mapper and SPOT Panchromatic Data. Photogrammetric Engineering & Remote Sensing, 62, 1067-1074.

10. Amolins, K., Zhang, Y. and Dare, P. (2007) Wavelet Based Image Fusion Techniques: An Introduction, Review and Comparison. ISPRS Journal of Photogrammetry and Remote Sensing, 62, 249-263. https://doi.org/10.1016/j.isprsjprs.2007.05.009

11. Ranchin, T. and Wald, L. (2000) Fusion of High Spatial and Spectral Resolution Images: The ARSIS Concept and Its Implementation. Photogrammetric Engineering & Remote Sensing, 66, 49-61.

12. Huang, N.E., Shen, Z., Long, S.R., et al. (1998) The Empirical Mode Decomposition and the Hilbert Spectrum for Nonlinear and Non-Stationary Time Series Analysis. Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, 454, 903-995. https://doi.org/10.1098/rspa.1998.0193

13. Nunes, J.C., Bouaoune, Y., Deléchelle, E., et al. (2003) Image Analysis by Bidimensional Empirical Mode Decomposition. Image & Vision Computing, 21, 1019-1026.

14. Liu, Z., Song, P., Zhang, J., et al. (2007) Bidimensional Empirical Mode Decomposition for the Fusion of Multispectral and Panchromatic Images. International Journal of Remote Sensing, 28, 4081-4093. https://doi.org/10.1080/01431160601075483

15. Shi, W., Yan, T., Ying, H., et al. (2009) A Two-Dimensional Empirical Mode Decomposition Method with Application for Fusing Panchromatic and Multispectral Satellite Images. International Journal of Remote Sensing, 30, 2637-2652. https://doi.org/10.1080/01431160802555770

16. Dong, W., Li, X., Lin, X., et al. (2014) A Bidimensional Empirical Mode Decomposition Method for Fusion of Multispectral and Panchromatic Remote Sensing Images. Remote Sensing, 6, 8446-8467. https://doi.org/10.3390/rs6098446

17. Schetselaar, E.M. (1998) Fusion by the IHS Transform: Should We Use Cylindrical or Spherical Coordinates? International Journal of Remote Sensing, 19, 759-765. https://doi.org/10.1080/014311698215982

18. Jia, Y., Li, D., Sun, J., et al. (1998) Comparison of IHS Transformation for Integrating SAR and TM Images. Journal of Remote Sensing, 2, 103-106.

19. Zhang, Y. and Hong, G. (2005) An IHS and Wavelet Integrated Approach to Improve Pan-Sharpening Visual Quality of Natural Colour IKONOS and QuickBird Images. Information Fusion, 6, 225-234.

20. Huang, D., Yang, M., Xu, H., et al. (2011) Fusion of Multi-Spectral and Panchromatic Images Using Optimal Estimation Theory. Geomatics & Information Science of Wuhan University, 36, 1039-1042.

21. Zhou, J., Civco, D.L. and Silander, J.A. (1998) A Wavelet Transform Method to Merge Landsat TM and SPOT Panchromatic Data. International Journal of Remote Sensing, 19, 743-757. https://doi.org/10.1080/014311698215973

22. Wang, Z. and Bovik, A.C. (2002) A Universal Image Quality Index. IEEE Signal Processing Letters, 9, 81-84. https://doi.org/10.1109/97.995823