Article

Pansharpening with a Guided Filter Based on Three-Layer Decomposition

1 School of Resource and Environmental Sciences, Wuhan University, Luoyu Road, Wuhan 430079, China
2 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Luoyu Road, Wuhan 430079, China
3 Beijing Advanced Innovation Center for Imaging Technology, Capital Normal University, Beijing 100048, China
4 Collaborative Innovation Center of Geospatial Technology, Luoyu Road, Wuhan 430079, China
* Author to whom correspondence should be addressed.
Sensors 2016, 16(7), 1068; https://doi.org/10.3390/s16071068
Submission received: 12 May 2016 / Revised: 24 June 2016 / Accepted: 5 July 2016 / Published: 12 July 2016
(This article belongs to the Section Remote Sensors)

Abstract: State-of-the-art pansharpening methods generally inject the spatial structures of a high spatial resolution (HR) panchromatic (PAN) image into the corresponding low spatial resolution (LR) multispectral (MS) image through an injection model. In this paper, a novel pansharpening method with an edge-preserving guided filter based on three-layer decomposition is proposed. In the proposed method, the PAN image is decomposed into three layers: a strong edge layer, a detail layer, and a low-frequency layer. The edge layer and detail layer are then injected into the MS image by a proportional injection model. In addition, two new quantitative evaluation indices, the modified correlation coefficient (MCC) and the modified universal image quality index (MUIQI), are developed. The proposed method was tested and verified on IKONOS, QuickBird, and Gaofen (GF)-1 satellite images, and it was compared with several state-of-the-art pansharpening methods from both qualitative and quantitative aspects. The experimental results confirm the superiority of the proposed method.


1. Introduction

With the rapid development of satellite sensors, remote sensing images have become widely used. In particular, images with both high spatial and spectral resolutions are highly desirable in various remote sensing applications, such as image classification, segmentation, and object detection [1,2]. However, due to the technical limitations of the sensors and other imaging factors, such ideal images cannot be obtained directly [3]. Most Earth observation satellites, such as QuickBird, IKONOS, GeoEye-1, and WorldView-2, provide both a high spatial resolution (HR) panchromatic (PAN) image with low spectral resolution and a low spatial resolution (LR) multispectral (MS) image with relatively higher spectral resolution. The fusion process that makes full use of the complementary information from the PAN and MS images to produce an HR MS image is referred to as pansharpening.
To date, a variety of pansharpening methods have been proposed. In general, most of the existing methods follow a basic protocol, which can be summarized as: (1) extract the high spatial structure information from the PAN image, e.g., with a filter or another tool; and (2) inject this high spatial structure information into the MS image based on a certain model. The image fusion methods based on this protocol can be sorted into several basic categories: arithmetic combination (AC)-based fusion methods, component substitution (CS)-based fusion methods, and multiresolution analysis (MRA)-based fusion methods. In addition, model-based fusion methods [4,5,6,7] have been developed in recent years; however, due to their complexity and time-consuming computations, these algorithms are not discussed in detail in this paper.
Among the pansharpening methods described above, the AC-based fusion methods are the simplest. They are based on the arithmetic combination of the PAN and MS bands. The most representative are the Brovey fusion method [8] and the UNB-Pansharp fusion method [9], which has been successfully commercialized in the PCI Geomatica software. The CS-based algorithms are another popular pansharpening category; their basic idea is that the MS bands are first transformed into a new space with decorrelated components to reduce information redundancy, and one of the components is then substituted by the HR PAN image to improve the spatial resolution of the MS image. The representative methods include the popular intensity-hue-saturation (IHS) fusion [10,11,12], Gram-Schmidt (GS) fusion [13], and principal component analysis (PCA) fusion [14]. In general, the AC-based and CS-based fusion methods can achieve fused products with good spatial structures; however, they perform slightly worse in preserving spectral information.
The MRA-based fusion methods generally introduce less spectral distortion, although they are somewhat more prone to spatial distortion. In general, they extract the high-frequency information of the PAN image based on the wavelet transform [15,16,17,18], the Laplacian pyramid [19,20], etc. In addition, edge-preserving filters have been introduced into MRA-based image fusion algorithms [21,22,23,24]. In particular, fusion methods based on the edge-preserving guided filter [25,26] have attracted ever-increasing attention in recent years. To the best of our knowledge, Li et al. [25] were the first to introduce the guided filter into data fusion for multi-focus and multi-modal images, where the guided filter was used to construct the weight map between the layers of the source images. Joshi et al. [26] subsequently proposed an image fusion method using a multistage guided filter. However, most of the fusion algorithms using edge-preserving filters decompose the PAN image into "low-frequency" information (which actually includes both the low-frequency and large-scale features) and detail information, without giving sufficient attention to the edge-preserving characteristics.
In this paper, a novel pansharpening method using a guided filter based on three-layer decomposition is proposed. The proposed algorithm is based on an MRA framework, and the PAN image is decomposed into a low-frequency layer, an edge layer, and a detail layer. The edge layer and the detail layer are then taken as the high spatial structures and injected into the MS image by a proportional injection model. In addition, two new quantitative evaluation indices are developed.
The remainder of this paper is organized as follows. In Section 2, the guided filter is briefly reviewed. The proposed method is presented in Section 3. In Section 4, the experimental results and analyses are presented, and Section 5 concludes the paper.

2. Guided Filter

The guided filter is derived from a local linear model. It generates the filtering output by considering the content of a guidance image, which can be either the input image itself or a different image. For convenience, we denote the guidance image as q, the input image as y, and the output image as O. The output image O is assumed to be a linear transformation of the guidance image q in a local window Ω_k centered at pixel k:

$$ O_i = a_k q_i + b_k, \quad i \in \Omega_k \qquad (1) $$

where (a_k, b_k) are linear coefficients and i is the pixel location. This model implies that $\nabla O = a_k \nabla q$, which ensures that the output image O has an edge only where q has an edge. The coefficients a_k and b_k are solved by minimizing the difference between y and O:

$$ E(a_k, b_k) = \sum_{i \in \Omega_k} \left( (a_k q_i + b_k - y_i)^2 + \varepsilon a_k^2 \right) \qquad (2) $$

where ε is a regularization parameter that prevents a_k from becoming too large.
For convenience, the guided filter can also be represented as:

$$ O_i = \sum_j w_{ij}(q)\, y_j \qquad (3) $$

where i and j are pixel indices and w_{ij} is a kernel function of the guidance image q that is independent of the input image y. It is expressed as follows:

$$ w_{ij} = \frac{1}{|\Omega|^2} \sum_{k:(i,j) \in \Omega_k} \left( 1 + \frac{(q_i - m_k)(q_j - m_k)}{\delta_k^2 + \varepsilon} \right) \qquad (4) $$

where m_k and δ_k^2 are the mean and variance of the guidance image in the window Ω_k, respectively. After obtaining the kernel function, the output image O can be computed by Equation (3).
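For readers who wish to experiment with the filter, the following minimal NumPy/SciPy sketch (not the authors' code) implements the guided filter of Equations (1)–(3) using the standard box-filter solution of the linear coefficients; the function name and the defaults (radius 2, ε = 0.01) mirror the settings reported in Section 4 but are otherwise our own assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(q, y, radius=2, eps=0.01):
    """Guided filtering of input y with guidance q (both 2-D float arrays scaled to 0-1)."""
    size = 2 * radius + 1                      # each window Omega_k covers (2r+1) x (2r+1) pixels
    mean = lambda x: uniform_filter(x, size)   # box (mean) filter over the window

    m_q, m_y = mean(q), mean(y)
    cov_qy = mean(q * y) - m_q * m_y           # windowed covariance of q and y
    var_q = mean(q * q) - m_q * m_q            # windowed variance of q (delta_k^2)

    a = cov_qy / (var_q + eps)                 # linear coefficients minimizing Equation (2)
    b = m_y - a * m_q

    # Average the coefficients of all windows covering a pixel, then apply Equation (1).
    return mean(a) * q + mean(b)
```

Averaging the coefficients a_k and b_k over all overlapping windows that cover a pixel is equivalent to the kernel form of Equation (3).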

3. Proposed Method

3.1. Overview

The proposed pansharpening method is outlined in Figure 1. It is based on an MRA framework using the popular edge-preserving guided filter. In the proposed method, the PAN image is decomposed into three layers, i.e., the edge layer, the detail layer, and the low-frequency layer, by exploiting the edge-preserving characteristics of the guided filter. The edge layer and detail layer are then injected into the MS image by a proportional injection model [9,18,27,28,29]. The main steps of the proposed method are as follows:
(1) The pixel values of the original MS and PAN images are normalized to 0–1 to strengthen the correlation between the MS bands and the PAN image. Histogram matching of the PAN image to the intensity component is then performed, where the intensity component I is a linear combination of the bicubic-resampled MS bands (denoted as MS) whose spectral responses are approximately covered by the PAN [7,30]. Here, the linear combination coefficients are calculated from the original MS bands and the downsampled PAN image by least-squares regression [31].
(2) The histogram-matched PAN image is decomposed into three layers, i.e., a strong edge layer E, a detail layer D, and a low-frequency layer L, using the three-layer decomposition technique described in Section 3.2.
(3) The edge layer E and the detail layer D are injected into each MS band by a proportional injection model to obtain the fused image: $F_b = \widetilde{MS}_b + W_b (uE + vD)$, where F_b denotes the b-th band of the fused image, and $\widetilde{MS}_b$ is the anti-aliasing bicubic-resampled MS image after guided filtering to suppress spatial distortion; here, the guidance image is the resampled MS image itself, so as to preserve its original spectral information as much as possible. W_b = MS_b / I is the weight of band b, which determines the amount of high-frequency information to be injected, and u and v are parameters that control the relative contributions of the edge layer and the detail layer, respectively. A minimal code sketch of this injection step is given after this list.
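The snippet below is a hedged sketch of steps (1) and (3), not the authors' implementation: intensity_component and inject_details are hypothetical helper names, the non-negative least-squares call reflects our reading of reference [31], and the MS images are assumed to be stored as (bands, rows, cols) float arrays.

```python
import numpy as np
from scipy.optimize import nnls

def intensity_component(ms_orig, pan_down, ms_up):
    """Intensity I as a linear combination of the resampled MS bands.

    The band weights are regressed from the original MS bands against the PAN image
    downsampled to the MS resolution (non-negative least squares, cf. [31])."""
    n_bands = ms_orig.shape[0]
    A = ms_orig.reshape(n_bands, -1).T          # one column per MS band
    w, _ = nnls(A, pan_down.ravel())            # combination coefficients
    return np.tensordot(w, ms_up, axes=1)       # I = sum_b w_b * MS_b at PAN resolution

def inject_details(ms_base, ms_up, intensity, E, D, u=1.0, v=1.0, eps=1e-6):
    """Proportional injection: F_b = MS~_b + W_b * (u*E + v*D), with W_b = MS_b / I."""
    W = ms_up / (intensity + eps)               # per-band proportional weights W_b
    return ms_base + W * (u * E + v * D)        # inject edge and detail layers into every band
```

Here ms_base stands for the guided-filtered resampled MS image ($\widetilde{MS}_b$) and ms_up for the plain resampled MS image used in the weights.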

3.2. Three-Layer Decomposition

The traditional pansharpening methods using edge-preserving filters generally decompose the PAN image into a "low-frequency" layer and a detail layer [21,22,23], following the traditional MRA-based fusion algorithms [15,16,17]; however, the decomposed "low-frequency" layer actually includes large-scale features. Bennett et al. [24] adopted a dual bilateral filter to fuse RGB and IR video streams, which decomposes the image into low frequencies, edges, and detail features. Inspired by this idea, a three-layer decomposition based on the guided filter is proposed for pansharpening to split the PAN image into a low-frequency layer, an edge layer, and a detail layer, as shown in Figure 2. The details are as follows:
(1) Firstly, the guided filter is applied to decompose the histogram-matched PAN image into a base layer and a detail layer:

$$ M = G(P) \qquad (5) $$

where M is the base layer, which contains both the low-frequency layer and the strong edge layer; P is the histogram-matched PAN image; and G denotes the guided filter. Here, the guidance image is the same as the input image, i.e., the histogram-matched PAN. Once the base layer is obtained, the detail layer is obtained by subtracting the base layer from the histogram-matched PAN image:

$$ D = P - M \qquad (6) $$

where D denotes the detail layer.
(2) The strong edges are then separated from the base layer, because even after the detail layer has been extracted, strong edges still remain in the base layer, as can be clearly seen in Figure 2:

$$ E = M - g * P \qquad (7) $$

where E is the strong edge layer, g denotes the Gaussian low-pass filter, and g * P represents the low-frequency layer of the PAN image. A minimal code sketch of the full decomposition is given below.
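The following sketch, reusing the guided_filter function sketched in Section 2, illustrates the three-layer decomposition of Equations (5)–(7); the Gaussian standard deviation sigma is an assumed value, as the paper does not report it.

```python
from scipy.ndimage import gaussian_filter

def three_layer_decomposition(pan, radius=2, eps=0.01, sigma=2.0):
    """Split the histogram-matched PAN image P into strong-edge, detail and low-frequency layers."""
    M = guided_filter(pan, pan, radius, eps)   # Equation (5): base layer, guidance image = P itself
    D = pan - M                                # Equation (6): detail layer
    L = gaussian_filter(pan, sigma)            # g * P, the Gaussian low-pass (low-frequency) layer
    E = M - L                                  # Equation (7): strong edge layer
    return E, D, L
```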

4. Experimental Results and Analyses

In the experiments, several remote sensing satellite images, including IKONOS, QuickBird, and GF-1, were utilized to comprehensively verify the effectiveness of the proposed method. In Wald's [32] view, the synthetic image should be as similar as possible to the image that the corresponding sensor would observe at the highest spatial resolution; however, as no ideal reference image exists, the original PAN and MS images were first degraded to an inferior spatial resolution level by the ratio of the spatial resolutions of the PAN and MS images, and the original MS image was then treated as the reference [7]. In addition, several state-of-the-art pansharpening methods were used for comparison, including the Gram-Schmidt (GS) fusion method (implemented with ENVI 4.7, and GS1 was obtained by the average of the low-resolution MS files), the principal component analysis (PCA) fusion method (implemented with ENVI 4.7), the adaptive intensity-hue-saturation (AIHS) fusion method [33], and the additive wavelet luminance proportional (AWLP) method [18].
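As an illustration of this degradation protocol, a sketch under our own assumptions (the paper does not state the low-pass kernel used): the PAN and MS images are blurred and decimated by the resolution ratio before fusion, and the original MS then serves as the reference.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(image, ratio=4, sigma=None):
    """Low-pass filter and decimate an image by the PAN/MS resolution ratio (Wald's protocol)."""
    sigma = ratio / 2.0 if sigma is None else sigma           # assumed blur strength
    if image.ndim == 3:                                        # multispectral: (bands, rows, cols)
        blurred = np.stack([gaussian_filter(band, sigma) for band in image])
        return blurred[:, ::ratio, ::ratio]
    return gaussian_filter(image, sigma)[::ratio, ::ratio]     # single-band PAN image

# Fusion is then performed on degrade(pan) and degrade(ms);
# the original MS image is used as the reference for the quality indices.
```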

4.1. Quantitative Evaluation Indices

The proposed method was verified from both qualitative and quantitative aspects. The qualitative evaluation involved analyzing the fused image directly through its visual effects. To quantitatively analyze the fused image, several popular evaluation indices were used, i.e., the correlation coefficient (CC) [18,32], the spectral angle mapper (SAM) [34], the universal image quality index (UIQI) [35], the root-mean-square error (RMSE) [18], and the relative dimensionless global error in synthesis (ERGAS) [18,34,36]. In addition, two new quantitative evaluation indices, i.e., the modified correlation coefficient (MCC) and the modified universal image quality index (MUIQI), are developed in this paper, as shown in Table 1. Here, F denotes the fused image, R represents the reference image, $\sigma_{V(F_i),V(R_i)}$ denotes the covariance between the spectral vectors of F and R (over bands b = 1, ..., B) at pixel position i, and N_1 N_2 is the number of pixels. The existing CC and UIQI mainly evaluate the radiance distortion, whereas the developed MCC and MUIQI more comprehensively evaluate both the radiance distortion and the preservation of the interrelationships among the spectral bands. In addition, to avoid the subjective evaluation of spectral profiles at only a few selected pixels, as in existing studies [7], the horizontal profiles of the column means for each band were introduced to evaluate the fused results more comprehensively and objectively.
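The sketch below computes MCC and MUIQI following our reading of the definitions in Table 1 (per-pixel statistics of the spectral vectors, averaged over all N_1 N_2 pixels); it is an illustrative implementation, not the authors' evaluation code.

```python
import numpy as np

def mcc_muiqi(F, R, eps=1e-12):
    """Modified CC and UIQI; F and R are (bands, rows, cols) fused and reference images."""
    mF, mR = F.mean(axis=0), R.mean(axis=0)          # per-pixel means of the spectral vectors
    dF, dR = F - mF, R - mR
    cov = (dF * dR).mean(axis=0)                     # per-pixel covariance between spectral vectors
    varF, varR = (dF ** 2).mean(axis=0), (dR ** 2).mean(axis=0)

    mcc = cov / (np.sqrt(varF * varR) + eps)         # per-pixel spectral-vector correlation
    muiqi = 4 * cov * mF * mR / ((varF + varR) * (mF ** 2 + mR ** 2) + eps)
    return mcc.mean(), muiqi.mean()                  # average over all N1*N2 pixels
```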

4.2. Experimental Results

The experiments were implemented on IKONOS, QuickBird, and GF-1 satellite images. The IKONOS experiment is presented first. Figure 3 shows the experimental results for the IKONOS satellite images from Huangshi City, Hubei Province, China. The proposed fusion result is shown in Figure 3g, with the parameter u set to 1.0 and v set to 1.2; the radius of the window and the parameter ε of the guided filter were empirically set to 2 and 0.01, respectively. Figure 3c–f show the fused results of the GS, PCA, AIHS, and AWLP methods, respectively. It can be seen that the PCA fusion result generates obvious color distortion. In contrast, the spectral distortion of the GS fusion result is relatively smaller, indicating that the GS method is more stable than the PCA method for vegetation areas. Figure 4 shows that the profiles of the GS and PCA fusion results, especially for bands 1–3, are quite different from those of the original image, which indicates the poor spectral information preservation of these two methods. In comparison, the AIHS and AWLP fusion results give relatively better visual effects for spectral preservation; however, Figure 4 shows that some local sections of the AIHS and AWLP profiles deviate to some degree from the original image. In contrast, the fusion result of the proposed method is the most similar to the reference image, and its spectral profiles are also the closest to those of the reference image, which indicates good spectral information preservation. Table 2 shows the quantitative evaluation results. Only the CC and MCC values of the proposed method are 0.0001 and 0.0008 lower, respectively, than the best values; all the other indices of the proposed method are better than those of the other fusion methods. Therefore, it is demonstrated that the proposed method can obtain a result with higher spectral fidelity and good spatial texture information.
The QuickBird experimental results are shown in Figure 5. The QuickBird PAN and MS images are located in Nanchang City, Jiangxi Province, China, and they were acquired on 8 October 2004. Figure 5g shows the proposed fusion result with the parameters u = 1.0 and v = 1.0. In addition, the radius of the window and the parameter ε of the guided filter were empirically set to 2 and 0.01, respectively. Figure 5c–f show the fused results of the GS, PCA, AIHS, and AWLP methods, respectively. On the whole, all the methods obtain good fused results. In comparison, the AIHS and AWLP fusion results present slight spatial distortions in this experiment. The proposed method suppresses the spatial distortions well, and it has a better spatial visual effect and higher spectral fidelity. To evaluate the fusion results objectively, the horizontal profiles of the column means for each band are displayed in Figure 6. The black dotted line represents the original image, and the closer a profile is to the black dotted line, the better the fused result. Figure 6 shows that there is a certain degree of deviation between the horizontal profiles of GS and PCA and those of the original image. The horizontal profiles of AIHS, AWLP, and the proposed method are the closest to the original image, and the differences between them are small. To compare the fusion methods comprehensively, the quantitative indices are shown in Table 3. Most of the evaluation indices for the proposed method are the best. The reason why some of the spectral indices of the PCA and GS methods are slightly better is that these two methods are relatively more stable for buildings and roads, which are the main features of the image. Overall, the proposed method not only obtains a good spatial effect but also has a higher spectral fidelity than the other methods.
Figure 7 shows the experimental results for GF-1 satellite images from Nanyang City, Henan Province, China, acquired on 6 August 2013. The parameter u was set to 1.0 and v to 0.9, and the radius of the window and the parameter ε of the guided filter were empirically set to 2 and 0.01, respectively. The experimental results are similar to those of the IKONOS experiment. As with the IKONOS experiment in Figure 3c,d, the GS and PCA methods show serious spectral distortion in this GF-1 experiment. Visually, the colors of the AIHS, AWLP, and proposed fusion results are the closest to the reference image. Figure 8 shows that the profiles of the AWLP, AIHS, and proposed fusion results are also the closest to the reference image, and it is hard to distinguish between them. Hence, to evaluate the fusion results more objectively, the quantitative indices of the fusion results are displayed in Table 4. The proposed method has slightly better fusion performance than the other methods.

4.3. Discussion

This paper has proposed a pansharpening method using an edge-preserving guided filter based on three-layer decomposition. It differs from the existing pansharpening methods with edge-preserving filters, which decompose the PAN image into a "low-frequency" layer (which actually includes both the low-frequency information and large-scale features, as shown in Figure 2) and a detail layer. In this paper, the PAN image is decomposed into three layers by considering the edge-preserving characteristics.
To verify the advantage of the proposed three-layer decomposition over the traditional two-layer decomposition, statistical experimental results using the three-layer and two-layer decompositions are compared. In this experiment, the IKONOS PAN (Figure 9a) and MS images (Figure 9b) were utilized, and the statistical results for the CC, UIQI, RMSE, ERGAS, SAM, MCC, and MUIQI are shown in Figure 10. The blue curve denotes the quantitative results of the traditional two-layer decomposition, and the red curve represents the statistical quantitative results of the three-layer decomposition. Here, the abscissa denotes different settings of the parameter u with v set to 1, indicating different amounts of the injected edge layer. When u = 0, the result reduces to that of the two-layer decomposition.
All the quantitative evaluation results first improve as the parameter u increases. This indicates that the proposed three-layer decomposition yields better fusion results than the traditional two-layer decomposition when the edge layer is injected to an appropriate degree, because the traditional two-layer decomposition neglects the large-scale features, as clearly shown in Figure 2. On the whole, the three-layer decomposition has an advantage over the traditional two-layer decomposition.

5. Conclusions

This paper has presented a pansharpening method with an edge-preserving guided filter based on three-layer decomposition. In the proposed method, the PAN image is decomposed into three layers, i.e., the edge layer, the detail layer, and the low-frequency layer, and the edge layer and the detail layer are then injected into the MS image by a proportional injection model. In addition, two new quantitative evaluation indices, MCC and MUIQI, have been proposed. The proposed method was comprehensively verified on IKONOS, QuickBird, and GF-1 satellite images, and it was compared with several state-of-the-art pansharpening methods from both qualitative and quantitative aspects. The evaluation results confirm that the proposed three-layer decomposition for pansharpening, based on the edge-preserving guided filter, outperforms the traditional two-layer decomposition, and it can improve the spatial resolution while preserving the spectral fidelity.

Acknowledgments

The authors would like to thank the anonymous reviewers for their insightful comments and suggestions. This work was supported by the National Natural Science Foundation of China under Grants 41271376 and 41422108, the Cross-disciplinary Collaborative Teams Program for Science, Technology and Innovation of the Chinese Academy of Sciences, and the Wuhan Science and Technology Program under Grant 2013072304010825.

Author Contributions

Xiangchao Meng and Huanfeng Shen conceived and designed the proposed method and experiments; Xiangchao Meng performed the experiments and wrote the paper; Jie Li, Huanfeng Shen, and Hongyan Zhang made valuable suggestions for the design of the proposed method and the revision of the paper; Huanfeng Shen, Liangpei Zhang, and Hongyan Zhang directed the research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sirguey, P.; Mathieu, R.; Arnaud, Y.; Khan, M.M.; Chanussot, J. Improving MODIS spatial resolution for snow mapping using wavelet fusion and ARSIS concept. IEEE Geosci. Remote Sens. Lett. 2008, 5, 78–82.
  2. Ulusoy, I.; Yuruk, H. New method for the fusion of complementary information from infrared and visual images for object detection. IET Image Proc. 2011, 5, 36–48.
  3. Zhang, Y. Understanding image fusion. Photogramm. Eng. Remote Sens. 2004, 70, 657–661.
  4. Meng, X.; Shen, H.; Zhang, H.; Zhang, L.; Li, H. Maximum a posteriori fusion method based on gradient consistency constraint for multispectral/panchromatic remote sensing images. Spectrosc. Spectr. Anal. 2014, 34, 1332–1337.
  5. Meng, X.; Shen, H.; Zhang, L.; Yuan, Q.; Li, H. A unified framework for spatio-temporal-spectral fusion of remote sensing images. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 2584–2587.
  6. Ballester, C.; Caselles, V.; Igual, L.; Verdera, J.; Rougé, B. A variational model for P+XS image fusion. Int. J. Comput. Vis. 2006, 69, 43–58.
  7. Zhang, L.; Shen, H.; Gong, W.; Zhang, H. Adjustable model-based fusion method for multispectral and panchromatic images. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2012, 42, 1693–1704.
  8. Gillespie, A.R.; Kahle, A.B.; Walker, R.E. Color enhancement of highly correlated images. II. Channel ratio and "chromaticity" transformation techniques. Remote Sens. Environ. 1987, 22, 343–365.
  9. Zhang, Y. System and Method for Image Fusion. U.S. Patent US20040141659 A1, 22 July 2004.
  10. Tu, T.-M.; Su, S.-C.; Shyu, H.-C.; Huang, P.S. A new look at IHS-like image fusion methods. Inf. Fusion 2001, 2, 177–186.
  11. Tu, T.-M.; Huang, P.S.; Hung, C.-L.; Chang, C.-P. A fast intensity-hue-saturation fusion technique with spectral adjustment for IKONOS imagery. IEEE Geosci. Remote Sens. Lett. 2004, 1, 309–312.
  12. Chien, C.-L.; Tsai, W.-H. Image fusion with no gamut problem by improved nonlinear IHS transforms for remote sensing. IEEE Trans. Geosci. Remote Sens. 2014, 52, 651–663.
  13. Brower, B.V.; Laben, C.A. Process for Enhancing the Spatial Resolution of Multispectral Imagery Using Pan-Sharpening. U.S. Patent US6011875 A, 4 January 2000.
  14. Shettigara, V. A generalized component substitution technique for spatial enhancement of multispectral images using a higher resolution data set. Photogramm. Eng. Remote Sens. 1992, 58, 561–567.
  15. Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A. Context-driven fusion of high spatial and spectral resolution images based on oversampled multiresolution analysis. IEEE Trans. Geosci. Remote Sens. 2002, 40, 2300–2312.
  16. Da Cunha, A.L.; Zhou, J.; Do, M.N. The nonsubsampled contourlet transform: Theory, design, and applications. IEEE Trans. Image Proc. 2006, 15, 3089–3101.
  17. Choi, M.; Kim, R.Y.; Nam, M.-R.; Kim, H.O. Fusion of multispectral and panchromatic satellite images using the curvelet transform. IEEE Geosci. Remote Sens. Lett. 2005, 2, 136–140.
  18. Otazu, X.; González-Audícana, M.; Fors, O.; Núñez, J. Introduction of sensor spectral response into image fusion methods. Application to wavelet-based methods. IEEE Trans. Geosci. Remote Sens. 2005, 43, 2376–2385.
  19. Burt, P.J.; Adelson, E.H. The Laplacian pyramid as a compact image code. IEEE Trans. Commun. 1983, 31, 532–540.
  20. Wang, W.; Chang, F. A multi-focus image fusion method based on Laplacian pyramid. J. Comput. 2011, 6, 2559–2566.
  21. Jiang, Y.; Wang, M. Image fusion using multiscale edge-preserving decomposition based on weighted least squares filter. IET Image Proc. 2014, 8, 183–190.
  22. Fattal, R.; Agrawala, M.; Rusinkiewicz, S. Multiscale shape and detail enhancement from multi-light image collections. ACM Trans. Graph. 2007, 26, 51.
  23. Hu, J.; Li, S. The multiscale directional bilateral filter and its application to multisensor image fusion. Inf. Fusion 2012, 13, 196–206.
  24. Bennett, E.P.; Mason, J.L.; McMillan, L. Multispectral bilateral video fusion. IEEE Trans. Image Proc. 2007, 16, 1185–1194.
  25. Li, S.; Kang, X.; Hu, J. Image fusion with guided filtering. IEEE Trans. Image Proc. 2013, 22, 2864–2875.
  26. Joshi, S.; Upla, K.P.; Joshi, M.V. Multi-resolution image fusion using multistage guided filter. In Proceedings of the Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG), Jodhpur, India, 18–21 December 2013; pp. 1–4.
  27. Padwick, C.; Deskevich, M.; Pacifici, F.; Smallwood, S. WorldView-2 pan-sharpening. In Proceedings of the ASPRS 2010 Annual Conference, San Diego, CA, USA, 26–30 April 2010.
  28. Kim, Y.; Lee, C.; Han, D.; Kim, Y.; Kim, Y. Improved additive-wavelet image fusion. IEEE Geosci. Remote Sens. Lett. 2011, 8, 263–267.
  29. Zhang, D.-M.; Zhang, X.-D. Pansharpening through proportional detail injection based on generalized relative spectral response. IEEE Geosci. Remote Sens. Lett. 2011, 8, 978–982.
  30. Meng, X.; Shen, H.; Li, H.; Yuan, Q.; Zhang, H.; Zhang, L. Improving the spatial resolution of hyperspectral image using panchromatic and multispectral images: An integrated method. In Proceedings of the 7th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Tokyo, Japan, 2–5 June 2015.
  31. Bro, R.; De Jong, S. A fast non-negativity-constrained least squares algorithm. J. Chemom. 1997, 11, 393–401.
  32. Wald, L.; Ranchin, T.; Mangolini, M. Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images. Photogramm. Eng. Remote Sens. 1997, 63, 691–699.
  33. Rahmani, S.; Strait, M.; Merkurjev, D.; Moeller, M.; Wittman, T. An adaptive IHS pan-sharpening method. IEEE Geosci. Remote Sens. Lett. 2010, 7, 746–750.
  34. He, X.; Condat, L.; Bioucas-Dias, J.; Chanussot, J.; Xia, J. A new pansharpening method based on spatial and spectral sparsity priors. IEEE Trans. Image Proc. 2014, 23, 4160–4174.
  35. Wang, Z.; Bovik, A.C. A universal image quality index. IEEE Signal Proc. Lett. 2002, 9, 81–84.
  36. Wald, L. Quality of high resolution synthesised images: Is there a simple criterion? In Proceedings of the Third Conference on Fusion of Earth Data: Merging Point Measurements, Raster Maps and Remotely Sensed Images, Sophia Antipolis, France, 26–28 January 2000; pp. 99–103.
Figure 1. Schematic diagram of the proposed method.
Figure 2. Schematic diagram of the three-layer decomposition.
Figure 3. Fusion results of the IKONOS experiment. (a) PAN image; (b) MS image; (c) GS fusion result; (d) PCA fusion result; (e) AIHS fusion result; (f) AWLP fusion result; (g) proposed fusion result; (h) original MS image.
Figure 4. Horizontal profiles of the column means for the IKONOS fusion results. (a) band 1; (b) band 2; (c) band 3; (d) band 4.
Figure 5. Fusion results of the QuickBird experiment. (a) PAN image; (b) MS image; (c) GS fusion result; (d) PCA fusion result; (e) AIHS fusion result; (f) AWLP fusion result; (g) proposed fusion result; (h) original MS image.
Figure 6. Horizontal profiles of the column means for the QuickBird fusion results. (a) band 1; (b) band 2; (c) band 3; (d) band 4.
Figure 7. Fusion results of the GF-1 experiment. (a) PAN image; (b) MS image; (c) GS fusion result; (d) PCA fusion result; (e) AIHS fusion result; (f) AWLP fusion result; (g) proposed fusion result; (h) original MS image.
Figure 8. Horizontal profiles of the column means for the GF-1 fusion results by the different fusion methods. (a) band 1; (b) band 2; (c) band 3; (d) band 4.
Figure 9. Experimental datasets used to validate the proposed three-layer decomposition against the traditional two-layer decomposition. (a) IKONOS PAN image; (b) IKONOS MS image with bicubic resampling.
Figure 10. Statistical results comparing the proposed three-layer decomposition with the two-layer decomposition. (a) Results of CC; (b) results of UIQI; (c) results of RMSE; (d) results of ERGAS; (e) results of SAM; (f) results of MCC; (g) results of MUIQI.
Table 1. Quantitative evaluation indices.

Evaluation Index | Definition | Meaning
CC [18,32] | $CC = \frac{1}{B}\sum_{b=1}^{B}\frac{\sigma_{F_b,R_b}}{\sigma_{F_b}\sigma_{R_b}}$ | the bigger the better
UIQI [35] | $UIQI = \frac{1}{B}\sum_{b=1}^{B}\frac{4\,\sigma_{F_b,R_b}\, m_{F_b} m_{R_b}}{(\sigma_{F_b}^2+\sigma_{R_b}^2)(m_{F_b}^2+m_{R_b}^2)}$ | the bigger the better
RMSE [18] | $RMSE = \frac{1}{B}\sum_{b=1}^{B}\sqrt{\frac{\|F_b-R_b\|_F^2}{N_1 N_2}}$ | the smaller the better
ERGAS [18,34,36] | $ERGAS = 100\,\frac{h}{l}\sqrt{\frac{1}{B}\sum_{b=1}^{B}\frac{RMSE_b^2}{m_{R_b}^2}}$ | the smaller the better
SAM [34] | $SAM = \frac{1}{N_1 N_2}\sum_{i=1}^{N_1 N_2}\cos^{-1}\frac{\sum_{b=1}^{B}F_{i,b}R_{i,b}}{\sqrt{\sum_{b=1}^{B}F_{i,b}^2}\sqrt{\sum_{b=1}^{B}R_{i,b}^2}}$ | the smaller the better
Proposed MCC | $MCC = \frac{1}{N_1 N_2}\sum_{i=1}^{N_1 N_2}\frac{\sigma_{V(F_i),V(R_i)}}{\sigma_{V(F_i)}\sigma_{V(R_i)}}$ | the bigger the better
Proposed MUIQI | $MUIQI = \frac{1}{N_1 N_2}\sum_{i=1}^{N_1 N_2}\frac{4\,\sigma_{V(F_i),V(R_i)}\, m_{V(F_i)} m_{V(R_i)}}{(\sigma_{V(F_i)}^2+\sigma_{V(R_i)}^2)(m_{V(F_i)}^2+m_{V(R_i)}^2)}$ | the bigger the better

Here $V(F_i)$ and $V(R_i)$ denote the spectral vectors $(F_{i,b})_{b=1,\dots,B}$ and $(R_{i,b})_{b=1,\dots,B}$ at pixel position i.
Table 2. Quantitative evaluation results of the IKONOS experiment (the best result is marked in bold, and the second-best result is underlined).

Quality Indices | Ideal Value | GS | PCA | AIHS | AWLP | Proposed
CC | 1 | 0.9370 | 0.8111 | 0.9509 | 0.9451 | 0.9508
RMSE | 0 | 57.8762 | 86.2993 | 50.2334 | 53.6589 | 47.6828
UIQI | 1 | 0.9129 | 0.7982 | 0.9381 | 0.9435 | 0.9500
ERGAS | 0 | 2.7924 | 4.2949 | 2.4145 | 2.5517 | 2.2823
SAM | 0 | 3.9072 | 6.0003 | 3.6110 | 3.5631 | 3.4877
MCC | 1 | 0.9226 | 0.8546 | 0.9299 | 0.9323 | 0.9315
MUIQI | 1 | 0.8869 | 0.8073 | 0.8958 | 0.8960 | 0.8975
Table 3. Quantitative evaluation results of the QuickBird experiment (the best result is marked in bold, and the second-best result is underlined).

Quality Indices | Ideal Value | GS | PCA | AIHS | AWLP | Proposed
CC | 1 | 0.9734 | 0.9739 | 0.9649 | 0.9691 | 0.9726
RMSE | 0 | 9.4454 | 9.0217 | 10.5901 | 9.4798 | 8.5592
UIQI | 1 | 0.9665 | 0.9715 | 0.9609 | 0.9688 | 0.9723
ERGAS | 0 | 0.5842 | 0.5649 | 0.6608 | 0.5811 | 0.5163
SAM | 0 | 0.7240 | 0.7766 | 0.7524 | 0.7004 | 0.6851
MCC | 1 | 0.9964 | 0.9962 | 0.9960 | 0.9965 | 0.9962
MUIQI | 1 | 0.9958 | 0.9956 | 0.9954 | 0.9954 | 0.9958
Table 4. Quantitative evaluation results of the GF-1 experiment (the best result is marked in bold, and the second-best result is underlined).

Quality Indices | Ideal Value | GS | PCA | AIHS | AWLP | Proposed
CC | 1 | 0.6072 | 0.4019 | 0.9308 | 0.9226 | 0.9326
RMSE | 0 | 63.756 | 80.3908 | 28.3677 | 29.731 | 27.7250
UIQI | 1 | 0.5959 | 0.3974 | 0.9258 | 0.9221 | 0.9317
ERGAS | 0 | 4.9706 | 6.0991 | 2.1309 | 2.1918 | 2.0526
SAM | 0 | 6.9835 | 9.4628 | 2.4487 | 2.4777 | 2.4206
MCC | 1 | 0.7970 | 0.7454 | 0.9336 | 0.9347 | 0.9349
MUIQI | 1 | 0.7159 | 0.6342 | 0.9036 | 0.9042 | 0.9054
