Al-Khwarizmi Engineering Journal, Vol. 11, No. 3, P.P. 85-95 (2015)

Multi-Focus Image Fusion Based on Pixel Significance Using Counterlet Transform

Iman M.G. Alwan
Department of Computer Science / College of Education for Women / University of Baghdad
Email: ainms_66@yahoo.com

(Received 8 May 2014; accepted 16 April 2015)

Abstract

The objective of image fusion is to merge multiple sources of images together in such a way that the final representation contains a higher amount of useful information than any input one. In this paper, a weighted average fusion method is proposed. It depends on using weights that are extracted from the source images using the counterlet transform. The extraction is done by making the approximation coefficients of the transform equal to zero and then taking the inverse counterlet transform to get the details of the images to be fused. The performance of the proposed algorithm has been verified on several grey scale and color test images, and compared with some present methods.

Keywords: image fusion, counterlet transform (CT), pixel significance, weighted average.

1. Introduction

Image fusion is known as the process by which we combine information from multiple images into a smaller group of images, generally a single one. This image retains the most eligible information from the inputs with fewer artifacts. In addition to decreasing the amount of data, a fusion algorithm creates images that are more convenient for machine or human perception and for further image processing. In CCD devices, the optical lenses have a limited depth of focus, so it is difficult to obtain all relevant objects in focus in a single image. But by fusing all focused objects from a sequence of images obtained by gradually shifting the focal plane over the scene, we can produce the desired image. This leads to the issue of multifocus image fusion [1]. Many works have been carried out in the field of multi-focus image fusion, as well as image fusion for other fields such as medical images, remote sensing, and the fusion of thermal and visible images for surveillance applications. Image fusion can be executed at any of three processing levels: signal (pixel), feature (object), and decision (symbol).
Pixel level fusion defines the process of fusing visual information associated with each pixel from a number of registered images into a single fused image, representing fusion at the lowest level [2,3,4]. Since the transform domain is a very useful tool for the representation, analysis, and interpretation of information, various multiscale transforms have been used, like the Laplacian, gradient, and contrast ratio of low pass pyramids, the discrete wavelet transform, and multiscale geometric analysis transforms [5,6,7,8]. Various image fusion rules can be implemented, such as 'maximum' or 'mean', where the fused coefficient is the maximum or the average of the source coefficients respectively. The maximum fusion rule is susceptible to noise, while the mean fusion rule reduces contrast, so another fusion rule, 'mean-max', is used. In this rule, 'mean' is used for the low frequency band and 'maximum' for the high frequency band (see the sketch below). Another method, proposed by the researchers in [9, 10], is the 'weighted average' of the source coefficients. In [11] an adaptive weighted average in the wavelet domain is proposed by computing a pixel's significance through the information available at bands of finer resolution. A fusion rule which can efficiently fuse multifocus images by calculating the weighted average of pixels in the wavelet domain is given in [12]. The weights are decided adaptively using the statistical properties of the neighborhood. The main idea is that the eigenvalues of the unbiased estimate of the covariance matrix of an image block depend on the strength of the edges in the block and thus make a suitable choice for the weight to be given to the pixel. This gives more weight to pixels with a sharper neighborhood in the fused image [12]. This method increases the quality, especially the sharpness, with minimum fusion artifacts. In [13] a weighted average method was presented which computes the weights by measuring the strength of details in a detail image obtained by subtracting the Cross Bilateral Filter (CBF) output from the original image. The weights thus computed are multiplied directly with the original source images, followed by weight normalization. This method has shown good performance.
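To make the 'mean-max' rule concrete, the following minimal NumPy sketch (an illustration only, not code from any of the cited methods) fuses one pair of low-band and high-band coefficient arrays:

```python
import numpy as np

def mean_max_fusion(low_a, low_b, high_a, high_b):
    """Illustrative 'mean-max' fusion rule: average the low-frequency
    (approximation) coefficients, and take the coefficient with the larger
    magnitude from either source for the high-frequency (detail) band."""
    fused_low = (low_a + low_b) / 2.0              # 'mean' for the low band
    take_a = np.abs(high_a) >= np.abs(high_b)      # where A has the stronger detail
    fused_high = np.where(take_a, high_a, high_b)  # 'maximum' for the high band
    return fused_low, fused_high
```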
In this paper, a novel method of obtaining a detail image is proposed; it depends on applying the counterlet transform to the images to be fused. Since the counterlet transform has multi-direction and anisotropy properties in addition to multi-scale and localization properties, it can capture the contours of the original images effectively with few coefficients [14]. The proposed algorithm depends on applying the CT to both multifocus images, making the approximation bands equal to zero, and then applying the inverse counterlet transform to obtain the detail images; the weights used to fuse the images can then be computed from these details.

2. Counterlet Transform

The contourlet transform, which was proposed by Do and Vetterli [14], is a multi-directional and multi-resolution transform. It provides an efficient approximation of images made of smooth contours by implementing a Laplacian Pyramid (LP) decomposition followed by a Directional Filter Bank (DFB) applied on each bandpass subband, as in Fig. 1(a). The images are decomposed into subbands by the LP, and each detail image is analyzed by the DFB. The DFB has the property of capturing the high frequency content of the input image while it leaks the low frequency of signals; thus the DFB is combined with the LP, which means removing the low frequency from the input image before applying the DFB. Fig. 1(b) shows the resulting frequency division, where the whole spectrum is divided both angularly and radially and the number of directions increases with frequency. The discrete contourlet transform achieves perfect reconstruction with a redundancy ratio of less than 4/3 [15].

Fig. 1. The counterlet transform: (a) Block diagram, (b) Resulting frequency division.

A. Laplacian Pyramid

The Laplacian pyramid (LP), which was introduced by Burt and Adelson [16], achieves multiscale decomposition.
The LP decomposition at each level produces a bandpass image by generating a down-sampled low pass version of the original and taking the difference between the original and its prediction, as in Fig. 2(a). In this figure, (a) and (b) are the analysis and synthesis filters, while 'S' is the sampling matrix. In Fig. 2(a) the outputs are a coarse approximation 'a1' and a difference 'a2' between the original signal and the prediction. The process can be iterated by decomposing the coarse version repeatedly. The original image is convolved with a Gaussian kernel; the resulting image is a low pass filtered version of the original image. The Laplacian is the difference between the original image and the low pass filtered image. A set of bandpass filtered images is obtained by continuing the above process; after applying these steps several times, a sequence of images is obtained [15].

Fig. 2. Laplacian Pyramid. (a) One level of decomposition. (b) The reconstruction of the Laplacian pyramid.
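A rough sketch of one LP analysis level as described above; the Gaussian kernel width and the SciPy-based resampling are illustrative assumptions, not the paper's exact filters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def lp_analysis_level(image, sigma=1.0):
    """One Laplacian-pyramid level: returns the coarse approximation 'a1'
    (down-sampled low-pass version) and the bandpass difference 'a2'
    between the original and its prediction from the coarse version."""
    img = image.astype(float)
    low = gaussian_filter(img, sigma)   # low-pass filtered version of the original
    a1 = low[::2, ::2]                  # down-sample by 2 in each direction
    pred = zoom(a1, 2.0, order=1)       # up-sample again to predict the original
    pred = pred[:img.shape[0], :img.shape[1]]
    a2 = img - pred                     # Laplacian (bandpass) difference image
    return a1, a2
```

Iterating `lp_analysis_level` on each returned `a1` yields the sequence of bandpass images mentioned above.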
B. Directional Filter Bank (DFB)

The directional filter bank is a critically sampled filter bank. Images can be decomposed into any power-of-two number of directions. The DFB partitions the frequency plane into wedge-shaped regions by implementing an n-level tree-structured decomposition, and a rule of tree expansion should be followed to get the required frequency partition. The high frequency content (smooth contours and directional edges) can be captured by the DFB, while low frequency components are handled poorly by it [15].

3. The Proposed Algorithm

The image fusion algorithm proposed in [13] directly fuses two source images of the same scene using a weighted average. That method differs from other weighted average methods in terms of the weight computation and the domain of the weighted average: the weights are computed by measuring the strength of details in a detail image obtained by subtracting the Cross Bilateral Filter (CBF) output from the original image, and the weights thus computed are multiplied directly with the original source images, followed by weight normalization. The idea is to capture most of the focused area details in the detail image so that these details can be used to find the weights for image fusion using the weighted average proposed in [12].

In this paper, the weights are computed by measuring the strength of details in a detail image obtained by applying the counterlet transform to the images to be fused. The detail image is obtained, for each of the images A and B to be fused, by making the coefficient values of the approximation band equal to zero and then applying the inverse counterlet transform. In multifocus images, an unfocused area in image A will be focused in image B, and applying the counterlet transform in this manner to both images will capture the details, which are then used to find the weights by measuring the strength of the details. Fig. 3 shows the block diagram of the proposed system.

Fig. 3. The block diagram of the proposed fusion system: the discrete counterlet transform is applied to the multifocus images A and B, the LL subbands of A and B are made equal to zero, the inverse discrete counterlet transform yields the detail images, the weights wA and wB are found by measuring the strength of details, and the fused image is formed from A and B depending on wA and wB.
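The contourlet toolbox used in the paper is MATLAB-based; as a stand-in, the same zero-the-approximation-band procedure can be sketched with the wavelet transform that the paper itself uses for comparison in Fig. 5 (Db2), here via the PyWavelets package. The package choice and the number of decomposition levels are assumptions for illustration:

```python
import numpy as np
import pywt

def detail_image(img, wavelet="db2", levels=3):
    """Detail image obtained by zeroing the approximation (LL) band and
    inverting the transform, mirroring the paper's counterlet-based step."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=levels)
    coeffs[0] = np.zeros_like(coeffs[0])         # make the approximation band zero
    detail = pywt.waverec2(coeffs, wavelet)      # inverse transform -> details only
    return detail[:img.shape[0], :img.shape[1]]  # crop padding added by the transform
```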
Fig. 4 shows the detail images obtained in the proposed algorithm with six levels of directional filter bank decomposition at each pyramid level. In order to show the power of the counterlet transform in representing edges and other singularities along the curves of images, the wavelet transform (Db2) is also used to capture the details of the multifocus images to be fused. Fig. 5 shows the detail images obtained by extracting the high frequency bands of the image through making the low frequency bands equal to zero in the wavelet domain. One can notice that the counterlet transform is more efficient in representing details than the wavelet transform.

Fig. 4. Multifocus clock source images in (a) and (b); detail images in (c) and (d), using the counterlet transform.

Fig. 5. Multifocus clock source images in (a) and (b); detail images in (c) and (d), using the wavelet transform.

The fusion rule proposed in [12] is adopted in [13] to compute the weights using the statistical properties of a neighborhood of a detail coefficient instead of a wavelet coefficient. That fusion algorithm is also adopted in this paper. In the fusion algorithm, a window of size $w \times w$ around a detail coefficient $d_A(x,y)$ or $d_B(x,y)$ is considered as a neighborhood used to compute its weight. This neighborhood is denoted as the matrix X.
Every row of X is treated as an observation and every column as a variable in order to compute the unbiased estimate $C_{x,y}^{h}$ of its covariance matrix, where $x, y$ are the spatial coordinates of the detail coefficient $d_A(x,y)$ or $d_B(x,y)$:

$C_{x,y}^{h} = \mathrm{cov}(X) = E\big[(X - E[X])(X - E[X])^{T}\big]$   ...(1)

$C_{x,y}^{h} = \frac{1}{w-1} \sum_{k=1}^{w} (x_k - \bar{x})(x_k - \bar{x})^{T}$   ...(2)

where $x_k$ is the $k$-th observation of the $w$-dimensional variable and $\bar{x}$ is the mean of the observations. It is observed that the diagonal of the matrix $C_{x,y}^{h}$ gives a vector of variances, one for each column of the matrix X. Now the eigenvalues of the matrix $C_{x,y}^{h}$ are computed, and the number of eigenvalues depends on the size of $C_{x,y}^{h}$. The sum of these eigenvalues is directly proportional to the horizontal detail strength of the neighborhood and is denoted $\mathrm{HdetailStrength}$ [12]. Similarly, the unbiased covariance estimate $C_{x,y}^{v}$ is computed by treating each column of X as an observation and each row as a variable (the opposite of $C_{x,y}^{h}$), and the sum of the eigenvalues of $C_{x,y}^{v}$ gives the vertical detail strength $\mathrm{VdetailStrength}$. That is,

$\mathrm{HdetailStrength}(x,y) = \sum_{k=1}^{w} \mathrm{eigen}_k\big(C_{x,y}^{h}\big)$   ...(3)

$\mathrm{VdetailStrength}(x,y) = \sum_{k=1}^{w} \mathrm{eigen}_k\big(C_{x,y}^{v}\big)$   ...(4)

where $\mathrm{eigen}_k$ is the $k$-th eigenvalue of the unbiased estimate of the covariance matrix. The weight given to a particular detail coefficient is then computed by adding the two respective detail strengths; therefore, the weight depends only on the strength of the details and not on the actual intensity values:

$wt(x,y) = \mathrm{HdetailStrength}(x,y) + \mathrm{VdetailStrength}(x,y)$   ...(5)

After computing the weights for all detail coefficients corresponding to both registered source images, the weighted average of the source images results in the fused image. If $wt_A$ and $wt_B$ are the weights for the detail coefficients $d_A$ and $d_B$ belonging to the respective source images A and B, then the weighted average of both is computed as the fused image using Eq. (6):

$F(x,y) = \dfrac{A(x,y)\, wt_A(x,y) + B(x,y)\, wt_B(x,y)}{wt_A(x,y) + wt_B(x,y)}$   ...(6)
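A direct, if slow, sketch of Eqs. (1)-(6); the reflect-padding at the borders is an assumption, since the paper does not specify border handling. Note that the sum of the eigenvalues of a covariance matrix equals its trace, which avoids an explicit eigendecomposition:

```python
import numpy as np

def detail_strength_weights(detail, w=5):
    """Per-pixel weight of Eq. (5): sum of the eigenvalues of the unbiased
    covariance estimate with rows as observations (horizontal strength,
    Eqs. 1-3) plus the one with columns as observations (vertical
    strength, Eq. 4)."""
    pad = w // 2
    d = np.pad(detail.astype(float), pad, mode="reflect")
    wt = np.zeros(detail.shape, dtype=float)
    for i in range(detail.shape[0]):
        for j in range(detail.shape[1]):
            X = d[i:i + w, j:j + w]                 # w x w neighborhood, matrix X
            Ch = np.cov(X, rowvar=False)            # rows = observations (unbiased)
            Cv = np.cov(X, rowvar=True)             # columns = observations
            wt[i, j] = np.trace(Ch) + np.trace(Cv)  # eigenvalue sums via the trace
    return wt

def fuse(A, B, wtA, wtB, eps=1e-12):
    """Weighted-average fusion of Eq. (6); eps guards flat regions where
    both weights vanish."""
    return (A * wtA + B * wtB) / (wtA + wtB + eps)
```

Here `wtA` and `wtB` come from the two detail images, while A and B are the original source images, matching the direct multiplication with the sources described above.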
4. Performance Measures

In most applications, the evaluation of fusion performance is a challenging task, as the ground truth is not available. In the literature, many metrics have been presented to evaluate the performance of image fusion [17, 18]. The following classical evaluation metrics are considered here for an extensive study [12]:

1) Average Pixel Intensity (API) or mean (F): an indicator of contrast.
2) Average Gradient (G): measures the degree of sharpness and clarity.
3) Standard Deviation (SD): the square root of the variance; it indicates the data spread.
4) Entropy (H): estimates the amount of information available in the image.
5) Mutual Information (MI) or Fusion Factor: quantifies the overall mutual information between the source images and the fused image.
6) Fusion Symmetry (FS) or Information Symmetry: an indication of how symmetric the fused image is with respect to the source images.
7) Normalized Correlation (CORR): measures the relevance of the fused image to the source images.
8) Petrovic metric $Q^{AB/F}$: measures the overall information transferred from the source images to the fused one.
9) Petrovic metric $L^{AB/F}$: measures the loss of edge information.
10) Petrovic metric $N^{AB/F}$: measures the artifacts or noise added to the fused image by the fusion process.

The following equations are used to compute parameters (1)-(7), assuming an image size of $M \times N$:

$\mathrm{API} = \bar{F} = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} f(i,j)$   ...(7)

where $f(i,j)$ is the pixel intensity at position $(i,j)$ of image F.

$\mathrm{SD} = \sqrt{\frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \big(f(i,j) - \bar{F}\big)^{2}}$   ...(8)

$\bar{G} = \frac{1}{MN} \sum_{i} \sum_{j} \sqrt{\frac{\big(f(i,j) - f(i+1,j)\big)^{2} + \big(f(i,j) - f(i,j+1)\big)^{2}}{2}}$   ...(9)

$H = -\sum_{f=0}^{255} p_F(f) \log_2 p_F(f)$   ...(10)

where $p_F(f)$ is the probability of intensity value $f$ in image F.

$MI_{AF} = \sum_{a} \sum_{f} p_{A,F}(a,f) \log_2 \frac{p_{A,F}(a,f)}{p_A(a)\, p_F(f)}$   ...(11)

$MI_{BF} = \sum_{b} \sum_{f} p_{B,F}(b,f) \log_2 \frac{p_{B,F}(b,f)}{p_B(b)\, p_F(f)}$   ...(12)

$MI = MI_{AF} + MI_{BF}$   ...(13)

$MI_{AF}$ is the mutual information between source image A and fused image F, while $MI_{BF}$ is the mutual information between source image B and fused image F; their sum MI quantifies the overall mutual information between the source images and the fused one.

$FS = 2 - \left| \frac{MI_{AF}}{MI_{AF} + MI_{BF}} - 0.5 \right|$   ...(17)

$C_{AF} = \frac{\sum_i \sum_j \big(a(i,j) - \bar{A}\big)\big(f(i,j) - \bar{F}\big)}{\sqrt{\sum_i \sum_j \big(a(i,j) - \bar{A}\big)^{2} \, \sum_i \sum_j \big(f(i,j) - \bar{F}\big)^{2}}}$   ...(18)

$C_{BF} = \frac{\sum_i \sum_j \big(b(i,j) - \bar{B}\big)\big(f(i,j) - \bar{F}\big)}{\sqrt{\sum_i \sum_j \big(b(i,j) - \bar{B}\big)^{2} \, \sum_i \sum_j \big(f(i,j) - \bar{F}\big)^{2}}}$   ...(19)

Here $C_{AF}$ and $C_{BF}$ represent the normalized correlation between each source image and the fused image; the overall average normalized correlation is

$\mathrm{CORR} = (C_{AF} + C_{BF})/2$   ...(20)

All the Petrovic metrics, proposed by Petrovic and Xydeas [19], are based on gradient information. They provide an in-depth analysis of fusion performance by quantifying the total fusion performance, the fusion loss, and the fusion artifacts (artificial information created). A Sobel edge detector is used to compute the orientation and strength information at each pixel in the source and fused images.
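For instance, the entropy of Eq. (10) and the mutual information of Eqs. (11)-(13) can be computed from grey-level histograms; a minimal NumPy sketch, assuming 8-bit images:

```python
import numpy as np

def entropy(img):
    """Eq. (10): Shannon entropy of the 256-bin grey-level histogram."""
    p, _ = np.histogram(img, bins=256, range=(0, 256))
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(src, fused):
    """Eqs. (11)/(12): MI between one source image and the fused image,
    from the normalized joint histogram."""
    joint, _, _ = np.histogram2d(src.ravel(), fused.ravel(),
                                 bins=256, range=[[0, 256], [0, 256]])
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal of the source image
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal of the fused image
    nz = p_xy > 0
    return np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz]))

# Eq. (13): fusion factor MI = mutual_information(A, F) + mutual_information(B, F)
```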
5. The Experimental Results

The experimental evaluation is performed on multifocus grey scale and color images of size 256 x 256 pixels. In the counterlet transform, the LP filter is 'pkva' and the DFB filter is of the '9-7' type; the number of directional filter bank decomposition levels at each pyramidal level (from coarse to fine scale) is, by experimental test, six for optimum results, and the neighborhood window used to find the detail strength is 5 x 5. Fig. 6 displays the multifocus grey scale test images used and the resultant fused images, while Fig. 7 displays the multifocus color test images used and the resultant fused images.

Fig. 6. The multifocus grey scale test images (Book, Desk, Lena, Clock, and Girl pairs) and the resultant fused images.

Fig. 7. The multifocus color test images (Ship, City, Container, Cookies, and Books pairs) and the resultant fused images.

Tables (1) and (2) show the evaluation results of the output fused images for the grey scale and color images respectively. The proposed method is evaluated by comparison with the results obtained using the image fusion toolbox kindly provided by Kumar [13]. The quality of the fused images is better when these parameters have higher values, excluding $L^{AB/F}$ and $N^{AB/F}$, which should have lower values. In Table (1), the higher values are bolded, except for $L^{AB/F}$ and $N^{AB/F}$, where the lower values are bolded.

Table 1,
Performance comparison of test grey scale multifocus images.

Image   Method     API        SD        G         H        MI       FS      CORR     Q^AB/F   L^AB/F   N^AB/F
Book    Proposed    84.3228   58.4854   12.2766   7.2700   9.0737   1.999   0.9918   0.986    0.014    1.6769e-005
        [13]        84.5140   59.4958   14.6905   7.2432   8.7287   1.998   0.9903   0.977    0.0232   2.7807e-005
Desk    Proposed    80.5457   67.5398   13.2131   7.2389   6.3178   1.971   0.9636   0.876    0.030    0.0034
        [13]        81.3517   66.8859   13.2674   7.4217   6.1486   1.996   0.9643   0.885    0.113    0.0116
Lena    Proposed    98.7034   51.4480   12.9027   7.5352   7.2215   1.983   0.9786   0.905    0.092    0.0345
        [13]        98.8909   50.4837   12.7901   7.4998   6.6233   1.966   0.9791   0.907    0.091    0.0098
Clock   Proposed    96.3000   49.7317    8.0268   7.2733   7.8923   1.973   0.9889   0.913    0.087    0.0047
        [13]        96.5571   50.2380    9.0733   7.2972   7.8681   1.958   0.9876   0.908    0.091    0.0014
Girl    Proposed   116.9926   50.9372   11.7083   7.5477   7.5616   1.995   0.9916   0.935    0.064    0.0037
        [13]       117.5374   51.3023   14.7759   7.5751   7.4948   1.998   0.9876   0.941    0.057    0.0060

Table 2,
Performance comparison of test colored multifocus images.

Image      Method     API        SD        G         H        MI       FS      CORR     Q^AB/F   L^AB/F   N^AB/F
Ship       Proposed   106.9133   48.443     8.7148   7.1714   8.1718   1.989   0.9886   0.889    0.109    0.0053
           [13]       107.279    48.643    10.415    7.2026   8.2783   1.990   0.986    0.903    0.095    0.0085
City       Proposed   114.7009   47.4911   13.074    7.5148   8.3899   1.981   0.9830   0.912    0.086    0.0127
           [13]       115.3971   56.3815   17.839    7.7572   8.8351   1.980   0.9808   0.916    0.081    0.0112
Container  Proposed   168.9581   53.1473    9.0436   7.4530   7.1750   1.986   0.9715   0.906    0.089    0.0256
           [13]       169.6191   53.9541   10.924    7.4819   7.3406   1.990   0.9698   0.914    0.080    0.0257
Cookies    Proposed   143.345    59.450    15.923    7.8245   9.9503   2.000   0.9826   0.969    0.038    0.0019
           [13]       143.898    59.436    17.372    7.8375   9.875    1.981   0.981    0.961    0.036    0.0098
Books      Proposed   162.848    51.9838   13.171    7.2913   6.7079   1.946   0.9821   0.924    0.079    0.0012
           [13]       163.517    51.6543   15.071    7.327    6.4308   1.951   0.9798   0.916    0.082    0.0037

From the results, we can notice that the performance of the proposed method is slightly better than that of [13] in terms of the overall mutual information between the source images and the fused image (MI), while for the other performance measures the proposed method is only a few tenths better or worse than the method in [13]. The proposed method was also compared, for the 'Book' and 'Desk' images, with the method in [5], which fuses multifocus images in the multiresolution DCT domain with a pixel level fusion rule, and with the methods in [11] and [12]. Table (3) shows the comparison results.

Table 3,
Performance comparison of the proposed method and the methods in [5, 11, 12].

Image   Method     API       SD        G         H        MI       FS      CORR     Q^AB/F   L^AB/F   N^AB/F
Book    Proposed   84.3228   58.4854   12.2766   7.2700   9.0737   1.999   0.9918   0.986    0.0143   1.6769e-005
        [5]        85.1322   61.0523   11.7826   7.2996   7.3344   1.989   0.9821   0.885    0.1115   0.0033
        [11]       84.7978   61.0248   11.7600   7.3045   7.0857   1.990   0.9819   0.884    0.1116   0.0045
        [12]       85.4582   61.2135   11.8963   7.3141   7.2377   1.992   0.9821   0.880    0.1155   0.0045
Desk    Proposed   80.5457   67.5398   13.2131   7.2389   6.3178   1.971   0.9636   0.899    0.030    0.0034
        [5]        81.4877   64.6112   13.0945   7.5262   5.2655   1.991   0.9659   0.888    0.1064   0.0048
        [11]       80.8079   64.3038   12.8823   7.4497   4.8638   1.947   0.9636   0.857    0.1328   0.0102
        [12]       80.6225   64.0681   12.8277   7.4846   4.8645   1.980   0.9656   0.876    0.1157   0.0078

From the results, we can perceive that there is a very small change in the values of the parameters. The proposed algorithm offers the highest values for $\bar{G}$ and MI, which mean the highest degree of sharpness and clarity and the highest overall mutual information between the source images and the fused one, respectively.
It also clearly offers superiority with respect to the Petrovic metrics: it gives the highest value for the quality factor $Q^{AB/F}$ and the lowest values for the loss of edge information $L^{AB/F}$ and for the noise or artifacts added by the fusion $N^{AB/F}$.

6. Conclusion

In this paper, a multifocus image fusion method was proposed. It depends on using detail images extracted from the source images by applying the counterlet transform, making the approximation subband equal to zero, and then applying the inverse counterlet transform; the resultant image contains the details only. Weights are computed by measuring the strength of the horizontal and vertical details, and these weights are used to fuse the source images directly. The experimental results show that the proposed method offers superior or similar performance compared to other fusion methods in terms of visual quality and quantitative parameters.

Notation

API                              Average Pixel Intensity
$d_A(x,y)$, $d_B(x,y)$           Detail coefficients
$C_{x,y}^{h}$, $C_{x,y}^{v}$     Unbiased covariance estimates
CBF                              Cross Bilateral Filter
CORR                             Normalized Correlation
FS                               Fusion Symmetry
G                                Average Gradient
H                                Entropy
MI                               Mutual Information
SD                               Standard Deviation
$Q^{AB/F}$, $L^{AB/F}$, $N^{AB/F}$   Petrovic metric parameters
$wt_A$, $wt_B$                   Weights of the detail coefficients of images A and B respectively

7. References

[1] Bedi, S. S., Khandelwal, R., "Comprehensive and Comparative Study of Image Fusion Techniques", International Journal of Soft Computing and Engineering, vol. 3, no. 1, pp. 300-304, March 2013.
[2] Li, H., Manjunath, B. S., and Mitra, S. K., "Multisensor Image Fusion using the Wavelet Transform", Graphical Models and Image Processing, vol. 57, no. 3, pp. 235-245, 1995.
[3] Hamza, A. B., He, Y., Krim, H., and Willsky, A., "A Multiscale Approach to Pixel-level Image Fusion", Integrated Computer-Aided Engineering, vol. 12, no. 2, pp. 135-146, 2005.
[4] Naidu, V.P.S., "Discrete Cosine Transform-based Image Fusion", Defence Science Journal, vol. 60, no. 1, pp. 48-54, 2010.
[5] Kumar, B.K., Swamy, M.N.S., and Ahmad, M.O., "Multiresolution DCT Decomposition for Multifocus Image Fusion", 26th IEEE Canadian Conference of Electrical and Computer Engineering, 2013.
[6] Haghighat, M.A., Aghagolzadeh, A., and Seyedarabi, H., "Multi-focus image fusion for visual sensor networks in DCT domain", Computers and Electrical Engineering, Elsevier, vol. 37, pp. 789-797, 2011.
[7] Wang, H., Peng, J., and Wu, W., "Fusion algorithm for multisensor images based on discrete multiwavelet transform", IEE Proceedings - Vision, Image, and Signal Processing, vol. 149, no. 5, pp. 283-289, 2002.
[8] Dyla, M.H., Tairi, H., "Multifocus Image Fusion Scheme Using a Combination of Nonsubsampled Counterlet Transform and an Image Decomposition Model", Journal of Theoretical and Applied Information Technology, vol. 38, no. 2, pp. 136-144, April 2012.
[9] Cheng, S., Junmin, H., and Zhongwei, L., "Medical Image of PET/CT Weighted Fusion Based on Wavelet Transform", International Conference on Bioinformatics and Biomedical Engineering (ICBBE), pp. 2523-2525, 2008.
[10] Shah, P., Merchant, S.N., and Desai, U.B., "Fusion of Surveillance Images in Infrared and Visible Band using Curvelet, Wavelet and Wavelet Packet Transform", International Journal of Wavelets, Multiresolution and Information Processing (IJWMIP), vol. 8, no. 2, pp. 271-292, 2010.
[11] Shah, P., Srikanth, T.V., Merchant, S.N., and Desai, U.B., "A Novel Multifocus Image Fusion Scheme Based on Pixel Significance Using Wavelet Transform", IVMSP, pp. 54-59, 2011.
[12] Shah, P., Merchant, S.N., and Desai, U.B., "An Efficient Adaptive Fusion Scheme for Multifocus Images in Wavelet Domain Using Statistical Properties of Neighborhood", 14th International Conference on Information Fusion, IEEE, pp. 1-7, 2011.
[13] Kumar, B.K., "Image Fusion Based on Pixel Significance Using Cross Bilateral Filter", Signal, Image and Video Processing, Springer, vol. 7, no. 5, September 2013.
[14] Po, D. D.-Y., Do, M. N., "Directional multiscale modeling of images using the contourlet transform", IEEE Transactions on Image Processing, vol. 15, no. 6, pp. 1610-1620, 2006.
[15] Tamilarasi, M., Palanisamy, V., "Contourlet Based Medical Image Compression Using Improved EZW", International Conference on Advances in Recent Technologies in Communication and Computing, IEEE Computer Society, pp. 800-804, 2009.
[16] Burt, P. J., and Adelson, E. H., "The Laplacian pyramid as a compact image code", IEEE Transactions on Communications, vol. 31, no. 4, pp. 532-540, 1983.
[17] Petrovic, V., "Multisensor Pixel-level Image Fusion", PhD Thesis, Department of Imaging Science and Biomedical Engineering, Manchester School of Engineering, United Kingdom, 2001.
[18] Kumar, B.K., "Multifocus and multispectral image fusion based on pixel significance using discrete cosine harmonic wavelet transform", Signal, Image and Video Processing, 2012, doi: 10.1007/s11760-012-0361-x.
[19] Petrovic, V., Xydeas, C., "Objective Image Fusion Performance Characterisation", Proceedings of ICCV'05, vol. 2, pp. 1866-1871, 2005.