Advances in Technology Innovation, vol. 5, no. 3, 2020, pp. 166-181

Evaluation of Pan-Sharpening Techniques Using Lagrange Optimization

Mutum Bidyarani Devi 1,*, Rajagopalan Devanathan 2

1 Department of Electronics and Communication Engineering, 2 Department of Electrical and Electronics Engineering, Hindustan Institute of Technology and Science, Chennai, India

Received 19 May 2019; received in revised form 07 December 2019; accepted 04 March 2020
DOI: https://doi.org/10.46604/aiti.2020.4288

Abstract

Earth observation satellites, such as IKONOS, simultaneously provide multispectral and panchromatic images. A multispectral image comes with a lower spatial and a higher spectral resolution, in contrast to a panchromatic image, which usually has a high spatial and a low spectral resolution. Pan-sharpening is the fusion of these two complementary images to produce an output image with both high spatial and high spectral resolution. The objective of this paper is to propose a new method of pan-sharpening based on pixel-level image manipulation and to compare it with several state-of-the-art pan-sharpening methods using different evaluation criteria. The paper presents an image fusion method based on pixel-level optimization using the Lagrange multiplier. Two cases are discussed: (a) the maximization of spectral consistency and (b) the minimization of the variance difference between the original data and the computed data. The performance of the pan-sharpening methods is evaluated qualitatively and quantitatively using evaluation criteria such as the Chi-square test, RMSE, SNR, SD, ERGAS, and RASE. Overall, the proposed method is shown to outperform all the existing methods.

Keywords: image fusion, pan-sharpening, optimization, spectral consistency

1.
Introduction

With the fast-growing number of Earth observation satellites, satellite data providing information about the surface of the Earth is increasing. This information has been applied in different fields for various end-users' applications, such as urban planning, agriculture, forestry, and mining. Depending on the sensors onboard the satellite, different types of images of the Earth's surface are received. Some sensors provide single-channel data, while other sensors provide both single- and multichannel data. Single-channel data, or monochrome images such as the panchromatic (pan) image, usually come with a high spatial resolution and a low spectral resolution. On the other hand, the multispectral (XS) image comes with a low spatial resolution and a high spectral resolution. For example, a commercial satellite sensor such as IKONOS provides 1 m pan and 4 m XS (R, G, B, and near-infrared bands) images.

The analysis and usage of remotely sensed data are user-dependent. Sometimes the processing of high-quality images is needed for certain applications, such as classification and target detection. Since satellite data are available at different spatial and spectral resolutions, as well as at different scales and times, data fusion has been applied successfully to obtain images of both high spatial and high spectral resolution. Data fusion can be defined as the process of combining two or more incoming signals which complement each other to produce an output signal with more information content than the individual inputs. More formally, "data fusion is a formal framework that expresses means and tools for the alliance of data of the same scene originating from different sources. It aims at obtaining information of greater quality; the exact definition of greater quality will depend on the application" [1].

Image fusion plays an important role in the remote sensing field, where satellite images with complementary resolutions are being made available in the public domain. Image fusion, in particular, is defined as the "combination of two or more different images (of the same scene) to form a new image by using a certain algorithm" [2]. One important application of image fusion has been increasing the resolution of XS imagery by using higher-resolution pan data. The output is an XS image whose resolution is both spatially and spectrally high. Such an image fusion process is usually termed pan-sharpening.

Pan-sharpening involves the integration of a panchromatic band and a multispectral band of different resolutions. Generally, a panchromatic band comes with a high spatial resolution and a low spectral resolution, while a multispectral band comes with a lower spatial resolution and a high spectral resolution. The integration of these images is required when a very high-quality image is needed. A panchromatic image provides details of the observed scene while lacking color properties. A multispectral image, on the other hand, can be useful in detecting objects with a similar spectral signature but belonging to different classes. In the pan-sharpening process, the properties of these complementary images are combined to get an output image that has high resolution both spatially and spectrally. However, one primary challenge in pan-sharpening is to preserve the spectral properties of the multispectral band in the output image. The assessment of the fused image can be performed using various evaluation parameters.

* Corresponding author. E-mail address: bidyarani.mutum@gmail.com
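As a concrete illustration of the resolutions involved (a sketch on synthetic arrays, not the paper's data or code), the 4:1 IKONOS pixel-size ratio and the down-sampling later used for the spectral-consistency check can be set up as follows; the 4×4 block-averaging is our assumed down-sampling choice:

```python
import numpy as np

# IKONOS delivers a 1 m pan band and a 4 m XS band, so a 4n x 4n pan tile
# covers the same ground as an n x n XS tile (here 32x32 pan vs 8x8 XS).
rng = np.random.default_rng(0)
pan = rng.uniform(0, 255, size=(32, 32))     # pan patch, 1 m pixels
xs = rng.uniform(0, 255, size=(8, 8, 3))     # XS patch, 4 m pixels, 3 bands

# Down-sample pan to the XS grid by 4x4 block averaging (one common choice):
blocks = pan.reshape(8, 4, 8, 4)
pan_low = blocks.mean(axis=(1, 3))           # 8x8, same grid as XS

# A pan-sharpening method produces a 32x32x3 output; spectral consistency
# asks that this output, down-sampled back to 8x8, stay close to `xs`.
print(pan_low.shape)
```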
The most commonly used fusion methods include the intensity-hue-saturation (IHS) transform, the Brovey transform, principal component analysis (PCA), smoothing filter-based intensity modulation (SFIM), the high pass filter (HPF), and the multiplicative transform.

In the IHS [3-6] pan-sharpening method, the RGB color space is transformed into the IHS color space. The transformation can be performed on only three bands at a time. The first step consists of resampling the XS images to the spatial resolution of the pan band. Secondly, the resampled RGB data are transformed into the IHS color space. Thirdly, the panchromatic band is histogram-matched to the intensity component. Fourthly, the intensity component is replaced by the histogram-matched panchromatic band. Finally, the reverse transformation is applied to obtain the new R, G, and B fused images.

The Brovey method [7], on the other hand, combines the pan and XS images using multiplication and division operations. The Brovey method is limited to three bands of the XS channels. Each XS band is divided by the sum of all three bands and multiplied by the pan band.

Smoothing filter-based intensity modulation (SFIM) is a smoothing algorithm [8] in which a low pass filter is applied to the high-resolution pan channel. Each low-resolution multispectral band is multiplied with the high-resolution pan band and divided by the low pass filtered pan band; this is done for every band of the multispectral channel.

The high pass filter (HPF) [8] method involves high pass filtering of the pan band with a 3×3 filter window. In this case, the high pass filtered pan band, divided by 2, is added to each multispectral band.

Principal component analysis (PCA) [9, 10] is another commonly used method applied to image fusion.
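Before turning to PCA, the Brovey and SFIM formulas just described can be sketched in a few lines. This is our illustrative implementation, not the authors' code: `xs_up` denotes the XS image already resampled to the pan grid, and the 5×5 mean filter in SFIM is our choice, since the text does not fix a particular low pass filter:

```python
import numpy as np

def brovey(xs_up, pan, eps=1e-6):
    """Brovey transform: each (resampled) XS band is divided by the sum of
    the three bands and multiplied by the pan band. xs_up: HxWx3, pan: HxW."""
    total = xs_up.sum(axis=2, keepdims=True) + eps
    return xs_up / total * pan[..., None]

def sfim(xs_up, pan, k=5, eps=1e-6):
    """SFIM: XS * pan / lowpass(pan), band by band, with a naive k x k
    mean filter standing in for the low pass filter."""
    pad = k // 2
    p = np.pad(pan, pad, mode="edge")
    low = np.zeros_like(pan)
    h, w = pan.shape
    for i in range(h):
        for j in range(w):
            low[i, j] = p[i:i + k, j:j + k].mean()  # box-filtered pan
    return xs_up * (pan / (low + eps))[..., None]
```

With a spatially constant pan band, SFIM leaves the XS values unchanged (the ratio pan/lowpass(pan) is 1), which matches its intent of injecting only the spatial detail of pan.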
First, the principal components (PC) are computed for the multispectral bands; then the first PC is replaced by the pan band, and an inverse transformation is applied to get the fused output. Lastly, the multiplicative transform [8] is another popular image fusion algorithm, and it is very simple to implement. Each low-resolution multispectral band is multiplied with the high-resolution pan band, with a corresponding weight, and a square root is taken on the final output to avoid excessive brightness values.

Earlier work of the authors [11] proposed a solution to the pan-sharpening problem which aims at maintaining the spectral consistency of the XS channels in the fused image. The primary objective of the current work is to compare the proposed pan-sharpening technique [11] with the existing fusion methods of IHS, Brovey, and other standard methods (namely, SFIM, HPF, PCA, and multiplication) based on several evaluation criteria such as the Chi-square test, RMSE, SNR, SD, ERGAS, and RASE. The main contributions of the paper are (a) a restatement of the authors' earlier proposal for pan-sharpening based on linear regression and Lagrange optimization, (b) the application of the proposed pan-sharpening algorithm to IKONOS data, (c) validation of the results of the proposed pan-sharpening method, (d) complexity analysis of the proposed pan-sharpening algorithm, and (e) a comparison of the performance of the proposed pan-sharpening method with the other fusion methods, viz. IHS, Brovey, SFIM, HPF, PCA, and multiplication, based on several evaluation indices, namely, the Chi-square test, RMSE, SNR, SD, ERGAS, and RASE.

The organization of the rest of the paper is as follows: Section 2 gives an overview of the existing work related to pan-sharpening, comprising state-of-the-art methods including model-based methods.
Section 3 gives a brief description of the proposed fusion method, formulated as an optimization problem with an objective function and a Lagrange multiplier-based solution. The proposed method is applied to IKONOS data as described in Section 4. Simulation results of the proposed method and a comparison with other fusion methods based on several evaluation criteria are discussed in Section 5. Finally, Section 6 presents the main discussion of the experimental results, followed by the conclusion.

2. Related Work

The state-of-the-art methods include the intensity-hue-saturation (IHS) transformation [3-6], the Brovey method [7], and principal component analysis (PCA) [12]. Many researchers have reported that, though these conventional methods produce a spatially high-resolution fused image, the images are spectrally distorted. Hong [13] proposed a method integrating "IHS and wavelet" transforms to preserve the spatial details. Zhang and Hong [14] also proposed a method integrating IHS and wavelet transforms to solve the spectral distortion problem. A method combining PCA and the nonsubsampled contourlet transform was reported by [15] to overcome the drawback of spectral distortion. Andreja [16] proposed a method integrating the IHS and Brovey methods with a multiplicative (MULTI) method for maintaining the spatial and spectral details. Yang [17] proposed an IHS-based pan-sharpening technique using the ripplet transform and compressed sensing for improved spectral properties. A pan-sharpening method based on Bayesian theory was proposed by [18]. A comparison between different popular image fusion techniques was reported in the works of [19-20]. Performance evaluations of fusion algorithms can also be seen in [20-22]. The assumption that the downsampled fused image should be similar to the original XS image was proposed by [23]. Meng [24] proposed a pan-sharpening technique which uses an edge-preserving guided filter based on a three-layer decomposition.
To strengthen pan-sharpening methods, [25] proposed a method involving the prior modification of the panchromatic image; this method simultaneously preserves spatial and spectral quality. A few variational models have also been reported. Ballester [26] proposed the first variational model, called P+XS. Fang [27] also proposed a model to fuse pan and XS images based on certain assumptions. Moller [28] proposed a model called variational wavelet pan-sharpening (VWP). Deng [29] also proposed a variational model based on kernel Hilbert space and the Heaviside function. Super resolution-based pan-sharpening can be found in the works of [30-32].

The fusion of satellite images has been performed on data obtained from various sensors, either from the same sensor or from different sensors onboard. Examples of single-source sensors include IKONOS, QuickBird, Landsat, SPOT, etc. IKONOS data fusion has been reported in the works of [6, 23, 33-34]. Fusion using SPOT data has been reported in [3, 35]. Several works have reported integrating IKONOS with Landsat data, as well as SPOT with Landsat data [4-5]. A review of different pan-sharpening techniques can be found in the works of [2, 36-39].

3. Proposed Fusion Method

3.1. Linear regression

The first step in our fusion method is to build a linear regression model based on the assumption that a strong correlation exists between the panchromatic and the multispectral bands. The regression model is defined as

aR + bG + cB = P    (1)

where R, G, and B represent the deviations from the respective sample means of the red, green, and blue spectral band intensity data, and a, b, and c are the regression coefficients. Eq. (1) can also be written as

Xu = P    (2)

where X = [R G B] is an n × 3 matrix containing columns of n samples of red, green, and blue color data.
Besides, u = [a b c]^T represents the vector of regression coefficients to be determined, where the superscript T stands for the matrix transpose. The regression coefficients can be calculated as

u = (X^T X)^(-1) X^T P    (3)

It is assumed that (X^T X)^(-1) exists.

3.2. Lagrange optimization

Using Eq. (1) as a constraint, an objective function for minimization is formulated to achieve spectral consistency and variance matching of the XS bands as follows:

J = α_r (1/N²) Σ_{j,k} R_jk² + (1 − α_r) [(1/N²) Σ_{j,k} R_jk² − σ_r²]²
  + α_g (1/N²) Σ_{j,k} G_jk² + (1 − α_g) [(1/N²) Σ_{j,k} G_jk² − σ_g²]²
  + α_b (1/N²) Σ_{j,k} B_jk² + (1 − α_b) [(1/N²) Σ_{j,k} B_jk² − σ_b²]²
  + Σ_{j,k} λ_jk (a R_jk + b G_jk + c B_jk − P_jk)    (4)

where R_jk, G_jk, and B_jk are the deviations from the respective sample means of the red, green, and blue bands at pixel location (j, k) of the N×N image. Besides, α_r, α_g, and α_b are the parameters for the red (R), green (G), and blue (B) bands, respectively, forming a convex combination of spectral consistency and variance matching; σ_r², σ_g², and σ_b² are the variances of the original red, green, and blue band data, respectively; and λ_jk is the Lagrange multiplier enforcing the constraint (1). The solution to the above minimization problem was reported in an earlier work of the authors [11] and is briefly described in the following section. Two independent cases can be derived from Eq. (4). Case 1: α_j = 1, j = r, g, b, for red, green, and blue, respectively. Case 2: α_j = 0, j = r, g, b, for red, green, and blue, respectively. The first case deals with minimizing the spectral inconsistency between the actual multispectral data and the computed data.
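A compact sketch of this pipeline follows (variable names are ours, not the paper's). It covers the regression step of Eq. (3) and the closed-form per-pixel solutions of both cases; for Case 2 it uses the variance-ratio substitution for the error ratios that the paper adopts in the optimal case:

```python
import numpy as np

def regression_coeffs(R, G, B, P):
    """Least-squares fit of a, b, c in aR + bG + cB ~ P, i.e. Eq. (3).
    R, G, B, P are mean-removed sample vectors of equal length n."""
    X = np.column_stack([R, G, B])              # n x 3
    u, *_ = np.linalg.lstsq(X, P, rcond=None)   # (X^T X)^-1 X^T P
    return u                                    # [a, b, c]

def case1(P, u):
    """Case 1 closed form: R = aP/(a^2 + b^2 + c^2), and likewise for G, B."""
    a, b, c = u
    d = a * a + b * b + c * c
    return a * P / d, b * P / d, c * P / d

def case2(P, u, var_r, var_g, var_b):
    """Case 2 closed form with error ratios replaced by variance ratios,
    e.g. e_r/e_g = var_r/var_g (our reading of the optimal-case result)."""
    a, b, c = u
    R = a * P / (a * a + b * b * var_r / var_g + c * c * var_r / var_b)
    G = b * P / (a * a * var_g / var_r + b * b + c * c * var_g / var_b)
    B = c * P / (a * a * var_b / var_r + b * b * var_b / var_g + c * c)
    return R, G, B
```

Adding the respective band means to the returned deviations yields the pan-sharpened outputs; note that in both cases the returned deviations satisfy the constraint aR + bG + cB = P exactly at every pixel.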
The second case compares the variances between the actual multispectral data and the computed data, assuming a Gaussian distribution of the intensity data.

Proposition 1: α_j = 1, j = r, g, b, for red, green, and blue, respectively. Eq. (4) can be written as

J = (1/N²) Σ_{j,k} (R_jk² + G_jk² + B_jk²) + Σ_{j,k} λ_jk (a R_jk + b G_jk + c B_jk − P_jk)    (5)

Without loss of generality, the three bands, namely red, green, and blue, can be considered independently. Differentiating Eq. (5) with respect to R_jk, the solution to Eq. (4) is given as

R_jk = a P_jk / (a² + b² + c²), for the red band    (6)

The pan-sharpened output for the red band is obtained as

R_new,jk^(1) = R_jk + R_mean    (7)

where R_new,jk^(1) is the pan-sharpened high-resolution output at location (j, k), and R_mean is the sample mean of the red band. The superscript (1) stands for Case 1. Similarly, differentiating Eq. (5) with respect to G_jk and B_jk, respectively, gives

G_jk = b P_jk / (a² + b² + c²), for the green band    (8)

with the pan-sharpened output for the green band

G_new,jk^(1) = G_jk + G_mean    (9)

and

B_jk = c P_jk / (a² + b² + c²), for the blue band    (10)

with the pan-sharpened output for the blue band

B_new,jk^(1) = B_jk + B_mean    (11)

where G_new,jk^(1) and B_new,jk^(1) are the pan-sharpened high-resolution outputs at location (j, k), and G_mean and B_mean are the sample means of the green and blue bands, respectively.

Proposition 2: α_j = 0, j = r, g, b, for the red, green, and blue bands, respectively. Eq.
(4) can be written as

J = [(1/N²) Σ_{j,k} R_jk² − σ_r²]² + [(1/N²) Σ_{j,k} G_jk² − σ_g²]² + [(1/N²) Σ_{j,k} B_jk² − σ_b²]²
  + Σ_{j,k} λ_jk (a R_jk + b G_jk + c B_jk − P_jk)    (12)

Without loss of generality, the three bands can again be considered independently. Differentiating Eq. (12) with respect to R_jk, G_jk, and B_jk, respectively, the solution to Eq. (4) is given as

R_jk = a P_jk / (a² + b² (e_r/e_g) + c² (e_r/e_b)), for the red band    (13)

G_jk = b P_jk / (a² (e_g/e_r) + b² + c² (e_g/e_b)), for the green band    (14)

and

B_jk = c P_jk / (a² (e_b/e_r) + b² (e_b/e_g) + c²), for the blue band    (15)

The pan-sharpened outputs for the red, green, and blue bands are respectively obtained as

R_new,jk^(2) = R_jk + R_mean,  G_new,jk^(2) = G_jk + G_mean,  B_new,jk^(2) = B_jk + B_mean    (16)

where R_new,jk^(2), G_new,jk^(2), and B_new,jk^(2) are the pan-sharpened high-resolution outputs, and R_mean, G_mean, and B_mean are the respective sample means of the red, green, and blue bands. The superscript (2) stands for Case 2.

Remark 1: e_r, e_g, and e_b are the errors associated with the multispectral red, green, and blue bands, representing the variance difference between the actual and the computed data. Eqs. (13)-(15) are circuitous, involving error ratios that depend on the solutions for R_jk, G_jk, and B_jk. For example, the ratios e_g/e_r and e_g/e_b in Eq. (14) for G_jk depend on R_jk, G_jk, and B_jk; similarly for the error ratios in Eqs. (13) and (15). The circuitous relation is resolved in [11] by showing that the error ratios, in the optimal case, can be taken as ratios of the variances of the respective spectral bands. For example, the error ratios in Eq.
(14) take the form e_g/e_r = σ_g²/σ_r² and e_g/e_b = σ_g²/σ_b². Similar results apply for the error ratios in Eqs. (13) and (15). The solutions of Case 1 and Case 2, as given in Eqs. (6), (8), and (10) as well as Eqs. (13)-(15), respectively, represent the high-resolution deviations (from the respective means) of the multispectral band for each channel.

Remark 2: The fusion process can be performed simultaneously for all three bands considered. R_new,jk^(1), G_new,jk^(1), and B_new,jk^(1) represent the projected high-resolution multispectral bands based on Case 1, while R_new,jk^(2), G_new,jk^(2), and B_new,jk^(2) represent the high-resolution multispectral bands based on Case 2.

Remark 3: Since Eqs. (6)-(11) and (13)-(16) are in closed form, the computation of the Case 1 and Case 2 outputs for each location (j, k) is done in a single operation. Hence, the complexity of the pan-sharpening computation is O(n), where n is the number of data points.

3.3. Proposed fusion method

Fig. 1 Flowchart of the proposed fusion method

Fig. 1 shows the methodology of the proposed fusion method, indicating the sequence of steps involved. First, a panchromatic band and multispectral band images were taken. Second, the panchromatic data was down-sampled to the lower resolution of the multispectral data, since the resolutions differ in the ratio of 4:1; for example, down-sampling an 8×8 pixel pan patch gives 2×2 pixels. Thirdly, the deviations from the respective sample means were calculated for both the pan and XS data. Fourthly, the regression coefficients were calculated using Eq. (3). The fifth step consists of applying the two proposed cases of the fusion method to the dataset, with the respective band means added to the deviations (Eqs.
(6)-(16)). Finally, the projected high-resolution multispectral data was down-sampled for comparison with the original low-resolution multispectral data.

4. Evaluation Criteria

The proposed method is applied to IKONOS data, and the results are presented in the following section, where they are also compared with those of other state-of-the-art methods on the same data. To compare the various results, a common set of evaluation criteria, described in this section, is used.

4.1. Chi-square test

The chi-square statistic [40] is computed as

χ² = Σ_i (O_i − E_i)² / E_i    (17)

where E_i and O_i are the expected and the observed data points, respectively, and i = 1, 2, …, n. In this work, a small p-value is taken to indicate a good fit: the lower the p-value, the lower the probability of obtaining a better fit than the one evaluated.

4.2. Root mean square error (RMSE)

The root mean square error (RMSE) [41] is defined as

RMSE = √( Σ_{i=1}^n (x_i − x̂_i)² / n )    (18)

where x_i is the original value and x̂_i is the computed value. A lower RMSE value indicates good spatial and spectral properties.

4.3. Signal to noise ratio (SNR)

The signal to noise ratio (SNR) [42] can be calculated as

SNR = Σ_{i=1}^n x̂_i² / Σ_{i=1}^n (x_i − x̂_i)²    (19)

where x_i and x̂_i are the original and computed data, respectively. A higher SNR value indicates a good result.

4.4. Spectral discrepancy (SD)

The spectral discrepancy (SD) [41] is usually computed to check the spectral quality of the fusion result:

SD = (1/N) Σ_{i=1}^N |x_i − x̂_i|    (20)

A lower SD value indicates a good spectral quality of the fusion result.

4.5.
Erreur relative globale adimensionnelle de synthèse (ERGAS)

"Erreur Relative Globale Adimensionnelle de Synthèse" (ERGAS) is an error index proposed by [43] which quantifies the global picture quality of the fusion output:

ERGAS = 100 (h/l) √( (1/Q) Σ_{q=1}^Q [RMSE(q) / μ(q)]² )    (21)

where h/l is the ratio of the pixel sizes of the pan and XS images, μ(q) is the mean of the q-th channel, and q is the band index. As proposed by [43], a lower ERGAS value indicates a better fusion output, while a higher value indicates a result of low quality.

4.6. Relative average spectral error (RASE)

The RASE index is expressed as a percentage characterizing the spectral quality of the fusion output [44]:

RASE = (100/M) √( (1/K) Σ_{k=1}^K RMSE(k)² )    (22)

where M is the average pixel value of the spectral bands considered, K is the number of bands, and RMSE(k) is the root mean square error of the k-th channel. As with ERGAS, a lower RASE value indicates a good spectral quality of the fusion output.

5. Experimental Results and Analysis

An IKONOS satellite image, of the kind used by many researchers for various applications, was used for the fusion process. In this work, the proposed methods and the other fusion methods were applied to a real IKONOS image consisting of a 32×32 pixel panchromatic band and a corresponding 8×8 pixel multispectral band (shown in Figs. 2 and 3, respectively). The dataset consists of smaller areas selected from a larger IKONOS satellite image that is freely available for use. The IKONOS satellite provides two types of images: (a) a panchromatic image (1 m spatial resolution) and (b) a multispectral image (4 m spatial resolution) comprising four bands, namely red, green, blue, and NIR. In this study, only the first three bands were considered.

Fig. 2 32×32 panchromatic band

Fig.
4 shows the fusion results based on the proposed method, and Figs. 5(a) through (f) show the fusion results based on the IHS, Brovey, SFIM, HPF, PCA, and multiplicative methods, respectively. In this work, the fusion result was obtained only for the three bands, namely red, green, and blue, of the multispectral channels. All the figures are presented in grayscale, since each band can be represented as a single color; the grayscale image is therefore equally effective.

Fig. 3 8×8 multispectral band

(a) Case 1 (b) Case 2
Fig. 4 Fusion result based on the proposed fusion method

For the assessment of the spectral and spatial quality of the fusion results, the projected high-resolution XS images are down-sampled so that they have the same dimensions as the original XS image. Spectral consistency means that the data down-sampled from the high-resolution XS should be close to the original XS data [45].

(a) IHS method (b) Brovey method (c) SFIM (d) High pass filter (e) PCA (f) Multiplicative method
Fig.
5 Fusion results

Table 1 Performance evaluation of the proposed fusion method (Case 1 and Case 2)

Evaluation criteria         Band           Case 1      Case 2
Chi-square test (p-value)   Red            0.01        0.0002
                            Green          1.4×10^-6   1.4×10^-6
                            Blue           3.9×10^-4   0.48
RMSE                        Red            4.75        4.43
                            Green          4.21        4.21
                            Blue           4.45        5.86
SNR                         Red            7.04        7.53
                            Green          11.39       11.4
                            Blue           10.13       7.70
Spectral discrepancy        Red            3.72        3.46
                            Green          2.54        2.53
                            Blue           2.22        3.35
ERGAS                       -              2.82        2.98
RASE                        -              10.63       11.61
Variance                    Red (21.17)    31.83       25.882
                            Green (17.81)  0.002       0.002
                            Blue (10)      3.5         12.5

In this study, the following evaluation factors were used for the qualitative and quantitative analysis of the discussed methods: the Chi-square test, RMSE, SNR, SD, ERGAS, and RASE. Table 1 shows the fusion results for Case 1 and Case 2 of the proposed method; its columns give the evaluation criterion, the multispectral band considered, and the results of the two cases, while its rows correspond to the criteria discussed in Section 4, applied to the red, green, and blue band data (the original band variances are given in parentheses). Table 2 shows the performance of the existing fusion methods, viz. IHS, Brovey, SFIM, HPF, PCA, and multiplication, under the same evaluation criteria; its columns correspond to the fusion methods and its rows to the evaluation criteria of Section 4. The comparisons made from Table 1 and Table 2 for the dataset used in this work can be summarized as follows.
Table 2 Performance evaluation of the IHS, Brovey, SFIM, HPF, PCA, and multiplication methods

Evaluation criteria         Band           IHS    Brovey  SFIM   HPF    PCA    Multiplication
Chi-square test (p-value)   Red            0.92   1       0.56   1      1      0
                            Green          0.09   1       0.99   1      1      0.1
                            Blue           0.15   1       0.97   1      1      7×10^-4
RMSE                        Red            5.38   23.5    7.19   16.02  26.27  3.15
                            Green          5.39   33.38   10.12  23.09  32.88  5.65
                            Blue           5.46   31.71   9.56   21.89  43.88  4.51
SNR                         Red            5.72   0.43    5.31   1.12   2.28   11.40
                            Green          8.28   0.44    5.36   1.07   2.45   7.58
                            Blue           7.74   0.44    4.85   1.08   2.04   9.24
Spectral discrepancy        Red            4.62   23.24   4.33   15.85  26.05  2.42
                            Green          4.62   33.19   6.08   22.99  32.84  5.19
                            Blue           4.63   31.57   5.77   21.83  43.87  4.02
RASE                        -              14.04  233.21  6      31     15     4
Variance                    Red (21.17)    58.4   3.05    37.19  17.52  52.49  35.67
                            Green (17.81)  52.31  4.13    53.07  24.68  17.55  42.63
                            Blue (10)      38.04  3.23    50.45  23.51  10.05  41.58

The evaluation indices discussed in Section 4 were applied to the proposed method and to each of the existing methods on the same dataset, so that a comparative analysis can be made from the results shown in Table 1 and Table 2. A numerical analysis of the proposed method against the state-of-the-art methods based on these criteria follows.

5.1. Chi-square test

From Table 1 and Table 2, it can be seen that Case 1 and Case 2 of the proposed method give the smallest p-values for nearly all the XS bands; the exception is the blue band, for which the multiplication method gives the lowest p-value of 0.0007. The p-value indicates the probability of obtaining a better fit: the smaller the p-value, the better the goodness of fit.
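For reference, p-values of this kind can be reproduced from the statistic of Eq. (17); the sketch below is ours, as is the reading that the down-sampled fused band plays the role of the observed data and the original XS band that of the expected data:

```python
import numpy as np
from scipy.stats import chi2

def chi_square_p(observed, expected, ddof=0):
    """Chi-square statistic of Eq. (17) and its p-value.
    `observed`: fused band down-sampled to the XS grid;
    `expected`: original XS band (assumed strictly positive)."""
    o = np.asarray(observed, dtype=float).ravel()
    e = np.asarray(expected, dtype=float).ravel()
    stat = np.sum((o - e) ** 2 / e)
    dof = o.size - 1 - ddof          # conventional degrees of freedom
    return stat, chi2.sf(stat, dof)  # survival function gives the p-value
```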
Therefore, there is only a small chance that the other methods provide a better fit than the proposed method. On the other hand, the Brovey, HPF, and PCA methods give the maximum p-value of 1 for all the XS bands; the fit is therefore poorer in those cases than for the proposed cases.

5.2. Root mean square error (RMSE)

The RMSE value indicates the error between the computed and the observed data, i.e., how far the computed data deviate from the observed data: the larger the RMSE, the greater the error. The RMSE for Case 1 was found to be 4.75, 4.21, and 4.45 for the red, green, and blue bands, respectively, while Case 2 gives corresponding values of 4.43, 4.21, and 5.86. The RMSE of the proposed method is comparable to that of the multiplicative method, which gives 3.15, 5.65, and 4.51 for the red, green, and blue bands, respectively, and the two proposed cases give lower values than all the remaining methods. The Brovey and PCA methods give the largest RMSE values among all the methods, indicating that the difference between the original XS and the computed XS is very large; for example, the Brovey method gives RMSE values of 23.5, 33.38, and 31.71 for the red, green, and blue bands, respectively.

5.3. Signal to noise ratio (SNR)

The SNR value reflects the amount of information contained in the fused output. The SNR is highest for the two proposed cases, with values of 7.04, 11.39, and 10.13 for Case 1 and 7.53, 11.4, and 7.70 for Case 2 for the three bands, respectively. The multiplication method gives similar values of 11.40, 7.58, and 9.24 for the three bands, respectively. The Brovey, HPF, and PCA methods give the lowest SNR values; for example, the PCA method gives low SNR values of 2.28, 2.45, and 2.04 for the red, green, and blue bands, respectively (Table 2).

5.4.
Spectral discrepancy (SD)

A low SD value indicates good spectral quality. The SD is lowest for the proposed method, followed by the multiplication method, while the Brovey and PCA methods give the highest discrepancies. The two proposed cases give values of 3.72, 2.54, and 2.22 for Case 1 and 3.46, 2.53, and 3.35 for Case 2 for the red, green, and blue bands, respectively. The existing methods give higher values than the proposed method; for example, the PCA-based method gives 26.05, 32.84, and 43.87 for the red, green, and blue bands, respectively. As mentioned above, SD indicates the level of spectral quality of the result: the higher the value, the lower the spectral quality.

5.5. Erreur relative globale adimensionnelle de synthèse (ERGAS)

The ERGAS values of the two proposed cases, 2.82 for Case 1 and 2.98 for Case 2, compare quite well with the multiplication method value of 3. The remaining methods give higher ERGAS values; for instance, HPF and PCA give values of 23 and 22, respectively. A low ERGAS value indicates a good fusion output; the Brovey, HPF, and PCA methods give the maximum ERGAS values and hence indicate poor outputs.

5.6. Relative average spectral error (RASE)

The SFIM and multiplication methods give lower RASE values, 6 and 4, respectively, than the two proposed cases (10.63 for Case 1 and 11.61 for Case 2). The Brovey method gives the maximum RASE value of 233.21. As with ERGAS, a low RASE value indicates a good output.

In terms of variance matching, for both cases of the proposed fusion method, the variance of the green band is almost constant at a very small value of 0.002, against the actual variance of 17.81. However, compared to the other methods, Case 2 gives the best approximations for the red and blue bands, 25.88 and 12.5, against the original variance values of 21.17 and 10, respectively.
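The indices used throughout Sections 5.2-5.6 follow directly from Eqs. (18)-(22); a minimal numpy sketch (function names are ours, with SNR written as the energy ratio of Eq. (19) and h/l = 1/4 for the IKONOS pan/XS pixel sizes):

```python
import numpy as np

def rmse(x, xh):
    return np.sqrt(np.mean((x - xh) ** 2))            # Eq. (18)

def snr(x, xh):
    return np.sum(xh ** 2) / np.sum((x - xh) ** 2)    # Eq. (19)

def sd(x, xh):
    return np.mean(np.abs(x - xh))                    # Eq. (20)

def ergas(bands, fused, h_over_l=0.25):
    """Eq. (21); `bands`/`fused` are lists of original/fused band arrays."""
    terms = [(rmse(x, xh) / np.mean(x)) ** 2 for x, xh in zip(bands, fused)]
    return 100.0 * h_over_l * np.sqrt(np.mean(terms))

def rase(bands, fused):
    """Eq. (22); M is the mean pixel value over the bands considered."""
    m = np.mean([np.mean(x) for x in bands])
    return (100.0 / m) * np.sqrt(np.mean([rmse(x, xh) ** 2
                                          for x, xh in zip(bands, fused)]))
```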
The IHS method gives a larger value than the original multispectral variance, and the Brovey method results in a lower variance than the actual multispectral variance. Though PCA seems to give a closer approximation to the actual variance for the green and blue bands, the result is not significant, since the other evaluation parameters of PCA, such as the p-value, RMSE, SD, ERGAS, and RASE values, do not agree with its variance data. Case 2 of the proposed method gives a better approximation to the actual variance for the red and blue bands.

There are certain specific cases where the existing fusion methods seem to indicate a better result than the proposed method. For example, the p-values for the blue band obtained by the IHS and multiplication methods are smaller than the p-value of the proposed method in Case 2; the critical values (CV) obtained through the IHS and multiplication methods are small, so the resulting p-values are small. The RMSE value of the multiplication method for the red band is also smaller than that of the proposed method, and the multiplication method consequently gives a higher SNR value for the red band, since, by the definition of SNR, a smaller RMSE results in a larger SNR. Lastly, the SFIM and multiplication methods give lower RASE values than the proposed method. In summary, however, it is clear that overall the proposed Cases 1 and 2 outperform all the existing methods.

6. Conclusion

In this paper, a new pixel value-based image fusion method for a high-resolution panchromatic band and a low-resolution multispectral band was proposed. A linear regression relationship between the panchromatic and multispectral bands was formulated.
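The regression relationship mentioned above can be sketched as an ordinary least-squares fit of the pan band against the (upsampled) XS bands. This is a generic illustration under assumed inputs, not the paper's exact formulation, which additionally imposes the Lagrange-multiplier constraints; the function name is hypothetical.

```python
import numpy as np

def fit_pan_from_xs(pan, xs_bands):
    """Least-squares weights w and intercept b such that
    pan ~= sum_k w_k * XS_k + b (illustrative sketch only).
    pan: 2-D array; xs_bands: list of 2-D arrays resampled to pan size."""
    # Design matrix: one column per XS band plus a constant column.
    A = np.column_stack([b.ravel() for b in xs_bands] + [np.ones(pan.size)])
    coef, *_ = np.linalg.lstsq(A, pan.ravel().astype(float), rcond=None)
    return coef[:-1], coef[-1]  # band weights, intercept
```

On synthetic data built from known weights, the fit recovers those weights exactly, which is a quick sanity check before applying it to real pan/XS pairs.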
A Lagrange multiplier based objective function, which seeks to maximize spectral consistency and to track the variance of the given data independently, was formed. The results of the proposed pan-sharpening method were compared with several existing image fusion methods, namely the IHS, Brovey, SFIM, HPF, PCA, and multiplication methods, on a common set of IKONOS satellite data. The comparison was made based on seven evaluation criteria. Considering overall performance, the proposed method came out favorably when compared with all the other existing methods, as the majority of the evaluation indices gave a better result for the proposed method. There were a few isolated cases in which an existing method improved on the proposed method for a particular criterion, as pointed out in the discussion section. Nevertheless, such improvements were sporadic and not supported by all the criteria, so the performance of the existing methods is put into question. Hence, it can be concluded that the proposed method outperforms the existing methods based on common data and independent criteria. As future work, it is necessary to study how robust the improved performance of the proposed method is, in the sense of how a change in the input data would affect the comparative performance.

Conflicts of Interest

The authors declare no conflict of interest.

Copyright© by the authors. Licensee TAETI, Taiwan. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY-NC) license (https://creativecommons.org/licenses/by-nc/4.0/).