Cultura e Scienza del Colore - Color Culture and Science | 07 | 2017 | ISSN 2384-9568

Michela Lecca
lecca@fbk.eu
Fondazione Bruno Kessler, ICT - Technologies of Vision, Trento, Italy

Using Color and Local Binary Patterns for Texture Retrieval

ABSTRACT
Texture plays a crucial role in detecting and recognizing materials in real-world pictures. Choosing the visual descriptors most appropriate for the detection and recognition tasks is generally a hard issue, due to the wide range of circumstances under which the imaged objects can appear. This work addresses the problem of illuminant invariant texture retrieval and proposes two algorithms that combine illuminant invariant color information with the textural cues output by the local binary pattern (LBP) operator. The experiments, carried out on a public database, show that the joint use of color and texture features remarkably improves the retrieval performance of techniques based on LBP only.

KEYWORDS Image Retrieval, Color, Texture, Local Binary Patterns

CITATION: Lecca M. (2017) 'Using Color and Local Binary Patterns for Texture Retrieval', Cultura e Scienza del Colore - Color Culture and Science Journal, 07, pp. 29-38, DOI: 10.23738/ccsj.i72017.03
Received 25 January 2016; Revised 08 May 2017; Accepted 14 May 2017

Michela Lecca is a permanent researcher of the Research Unit Technologies of Vision of Fondazione Bruno Kessler (Trento, Italy). Her research interests include color image processing, object recognition, image retrieval and labeling, and low-level image processing for embedded vision systems. She is a member of the International Association for Pattern Recognition IAPR-GIRPR and of the Gruppo Italiano del Colore - Associazione Italiana Colore.

1. INTRODUCTION
Image retrieval consists of detecting a visual correspondence between images. Precisely, given a database of known images, called references, and a new, previously unseen image, called the query, the problem is to find in the database the reference that is most similar to the query. The visual similarity is defined as a matching function (e.g. a distance) between visual features, such as color or edge distributions, that are relevant to the retrieval task. In general, choosing the features and the matching function that yield the highest retrieval performance is a hard issue [1]. Moreover, the visual features, as well as the matching function, are often required to be robust to image variations, such as changes of size, orientation, and light conditions. This work, presented in [2] at the 10th Colour Conference, focuses on the retrieval of color pictures where references and queries depict the same textured materials acquired under different lights. Precisely, this paper reports two algorithms performing color-based illuminant invariant texture retrieval. Both algorithms describe the visual appearance of an image by combining illuminant invariant color information with the textural cues output by the local binary pattern (LBP) operator [3], and perform image retrieval by a nearest neighbor classifier.
In agreement with previous works [4] [5] [6] [7], this paper empirically shows that adding color features to LBPs remarkably increases the retrieval performance of techniques based on LBP analysis only [3] [7].

The LBP operator detects image micro-structures (e.g. edges, lines, flat regions), which are represented by binary vectors, called LBPs, usually normalized in order to be robust against in-plane rotations of the image [3]. By definition, the LBP operator is insensitive to monotonic changes of image intensity, while invariance to chromatic illuminant changes can be obtained by computing the LBPs separately on the three image channels [5], [7]. Thanks to its high discrimination capability and low computational complexity, the LBP operator has been successfully applied to a wide variety of computer vision tasks, such as video background subtraction [8], face detection and identification [9], fingerprint recognition [10], and image classification [11].

The main issue addressed here is how to add color information while preserving the illuminant invariance of the LBPs. In fact, color is one of the most important features for image description and retrieval [12], but its usage in practice is often limited by its strong sensitivity to the light. The proposed algorithms, named GW-algorithm and vK-algorithm, cope with this problem in different ways. The GW-algorithm provides an illuminant invariant color and texture description as follows. First, it normalizes the colors of the image with the Gray-World algorithm; second, for each normalized color channel, it computes the joint probability map of intensity and rotation invariant LBPs; finally, it concatenates the probability maps of the three color channels and performs retrieval by comparing them by means of an $L^p$ metric [13]. The vK-algorithm computes the joint probability map as before, but without any color normalization, and implements a matching strategy, based on the von Kries model [14] [15] [16], that takes into account possible changes of colors due to illuminant variations. In both algorithms, the color and texture probability map can be weighted by the Euclidean distance of each pixel from the image barycenter, in order to provide a global spatial description of the color distribution. The experiments carried out on the public real-world dataset [17] show that the proposed algorithms outperform the approaches based on color or LBPs individually, and, in general, better results are obtained by using the weighted probability map.

2. LOCAL BINARY PATTERNS: DEFINITION AND COMPUTATION
Let $p = (x, y)$ be a pixel of a gray level image $I$ and let $N_{R,P}(p)$ be a circular neighborhood of $p$, where $R$ denotes the radius of the neighborhood and $P$ is the number of sampling points, whose coordinates $(x_i, y_i)$ are defined by

$(x_i, y_i) = \left(x + R\cos\tfrac{2\pi i}{P},\; y - R\sin\tfrac{2\pi i}{P}\right), \qquad i = 0, \dots, P-1,$

where $(x, y)$ are the coordinates of $p$. Due to the discrete nature of the data, when a sampling point does not fall at integer coordinates, its gray value is bilinearly interpolated. The LBP label at $p$ is defined as

$\mathrm{LBP}_{R,P}(p) = \sum_{i=0}^{P-1} s\big(I(x_i, y_i) - I(x, y)\big)\, 2^i,$

where $I(\cdot)$ indicates the gray value and $s$ is a thresholding function from $\mathbb{R}$ to $\{0, 1\}$ such that $s(t) = 1$ if $t \geq 0$ and $s(t) = 0$ otherwise. The ordered sequence $\big(s(I(x_0, y_0) - I(x, y)), \dots, s(I(x_{P-1}, y_{P-1}) - I(x, y))\big)$ is called the LBP code. Any image rotation changes the order of the LBP code entries. Invariance against rotations by an angle $\tfrac{2\pi k}{P}$ (with $k$ an integer number) can be achieved by a circular bitwise shift of the code (see Fig. 1 for an example).
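To make this construction concrete, here is a minimal Python sketch (an illustration only, not the implementation used in this work; NumPy is assumed) that computes the LBP code of a single pixel on a circular neighborhood with bilinear interpolation, and normalizes it for rotation invariance by taking the circular shift with minimal binary value.

```python
import numpy as np

def bilinear(img, x, y):
    # Bilinear interpolation of the gray value of img at real-valued coordinates (x, y).
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * img[y0, x0] + fx * (1 - fy) * img[y0, x1] +
            (1 - fx) * fy * img[y1, x0] + fx * fy * img[y1, x1])

def lbp_code(img, x, y, R=1, P=8):
    # LBP code at pixel (x, y): threshold the P sampled neighbors against the center value.
    center = img[y, x]
    code = []
    for i in range(P):
        xi = x + R * np.cos(2 * np.pi * i / P)
        yi = y - R * np.sin(2 * np.pi * i / P)
        code.append(1 if bilinear(img, xi, yi) >= center else 0)
    return code

def rotation_normalize(code):
    # Invariance to rotations by 2*pi*k/P: pick the circular shift whose binary value is minimal.
    shifts = [code[i:] + code[:i] for i in range(len(code))]
    return min(shifts, key=lambda c: int("".join(map(str, c)), 2))

# Example on a random gray level patch (interior pixel, so the neighborhood fits in the image).
img = np.random.randint(0, 256, (32, 32)).astype(float)
code = lbp_code(img, 10, 10)
print(code, rotation_normalize(code))
```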
Experiments reported in [3] and [18] showed that the best texture description and classification is achieved by considering only a small subset of LBPs, called uniform LBPs. An LBP is said to be uniform if it contains at most two bitwise transitions (from 0 to 1 or vice-versa). The rotation invariant uniform LBP (denoted as LBPriu, where ri stands for rotation invariant and u for uniform) of a neighborhood centered in $p$ is computed by the following equations:

$\mathrm{LBP}^{riu}_{R,P}(p) = \begin{cases} \sum_{i=0}^{P-1} s\big(I(x_i, y_i) - I(x, y)\big) & \text{if } U(p) \leq 2 \\ P + 1 & \text{otherwise,} \end{cases} \quad (2.1a)$

with

$U(p) = \big|s(I(x_{P-1}, y_{P-1}) - I(x, y)) - s(I(x_0, y_0) - I(x, y))\big| + \sum_{i=1}^{P-1} \big|s(I(x_i, y_i) - I(x, y)) - s(I(x_{i-1}, y_{i-1}) - I(x, y))\big|. \quad (2.1b)$

By Equations (2.1a) and (2.1b), there are exactly $P + 2$ LBPrius: each uniform pattern is labeled by a number from 0 to $P$, corresponding to the number of 1's in its LBP code, while the non-uniform LBPs are grouped together under the label $P + 1$. Fig. 2 shows the nine LBPrius for $R = 1$ and $P = 8$.

By definition, the LBPrius are invariant to monotonic changes and shifts of the gray level intensity of any image. In fact, for any real numbers $a$ and $b$, with $a > 0$,

$\mathrm{LBP}^{riu}_{R,P}(p;\, aI + b) = \mathrm{LBP}^{riu}_{R,P}(p;\, I). \quad (2.2)$

Therefore, the LBPs are robust to image noise modeled as an additive term, and to any change of image intensity caused for instance by shadows or by varying the distance of the camera from the light sources.

Figure 1 - Example of LBP computation on a 3x3 neighborhood. The central gray value is highlighted in red. The LBP codes and their corresponding labels with and without rotation normalization are shown in the blue and green boxes respectively.

As a consequence, the LBPrius of the red, green, and blue channels of a color image provide an illuminant invariant description of the colored texture. In fact, the change of image colors due to an illuminant change is well approximated by the von Kries diagonal map between the color responses captured under the varied lights [14], [16], i.e.

$I'_k(x, y) = \alpha_k\, I_k(x, y), \qquad k = 0, 1, 2, \quad (2.3)$

where the $\alpha_k$'s are real strictly positive numbers, called von Kries coefficients, and $I_k(x, y)$ is the intensity value of the $k$-th color channel ($k$ = 0, 1, 2, i.e. red, green, blue) of the RGB input image. Since the values of each color channel under the von Kries map satisfy Equation (2.2) with $b = 0$, the corresponding LBPs are illuminant invariant. Similarly, the LBPs of the 1D chromaticity image defined by Equation (2.4) are also insensitive to changes of illuminant.

Figure 2 - The nine uniform LBPs ($R$ = 1, $P$ = 8). Black and white circles represent the 0 and 1 bit values of the LBP operator. The number in blue denotes the unique code of each rotation invariant uniform LBP.

3. VISUAL DESCRIPTION BY COLOR AND LBPs
In many retrieval applications, the occurrences of the LBPrius of an image are encoded into a histogram with $P + 2$ bins [19]. The LBPriu histogram has been proved to be an excellent feature for fast texture classification, also in comparison with other descriptors, such as co-occurrence matrices, Gabor filters, wavelets, or Gaussian Markov random fields [20] [21]. This work shows that adding information about the distribution of the image color to the local description of the image texture provided by the LBPriu histogram further increases the retrieval performance obtained by using the LBPriu histograms only.
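A possible implementation of the rotation invariant uniform labeling of Equations (2.1a)-(2.1b) and of the $(P+2)$-bin LBPriu histogram could look as follows. This is a hedged sketch, not the original code of [3] or of this paper: it reuses the lbp_code helper from the previous listing, and the handling of the image border (only pixels whose neighborhood fits inside the image are labeled) is an illustrative choice.

```python
import numpy as np

def riu_label(code):
    # Eq. (2.1b): number of bitwise 0/1 transitions along the circular code.
    P = len(code)
    transitions = sum(code[i] != code[(i + 1) % P] for i in range(P))
    # Eq. (2.1a): uniform patterns (at most two transitions) are labeled by their
    # number of 1s; all non-uniform patterns share the single label P + 1.
    return sum(code) if transitions <= 2 else P + 1

def lbp_riu_map(img, R=1, P=8):
    # LBPriu label for every pixel whose circular neighborhood fits inside the image.
    # lbp_code is the helper from the previous sketch.
    h, w = img.shape
    m = int(np.ceil(R))
    labels = np.empty((h - 2 * m, w - 2 * m), dtype=int)
    for y in range(m, h - m):
        for x in range(m, w - m):
            labels[y - m, x - m] = riu_label(lbp_code(img, x, y, R, P))
    return labels

def lbp_riu_histogram(img, R=1, P=8):
    # Histogram with P + 2 bins: labels 0 .. P plus the non-uniform label P + 1.
    hist = np.bincount(lbp_riu_map(img, R, P).ravel(), minlength=P + 2).astype(float)
    return hist / hist.sum()
```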
While the LBPs are invariant to changes of illuminant, color strongly depends on the light: the two algorithms described next propose two different ways to use color while preserving illuminant invariance.

3.1. THE GW-ALGORITHM
This algorithm is named GW-algorithm because it achieves invariance to changes of illuminant by normalizing the colors of any textured image with the well-known Gray-World algorithm. The Gray-World algorithm maps the color response $(I_0(x, y), I_1(x, y), I_2(x, y))$ at an image pixel $(x, y)$ onto the triplet $(\tilde I_0(x, y), \tilde I_1(x, y), \tilde I_2(x, y))$, with

$\tilde I_k(x, y) = 128\, \frac{I_k(x, y)}{\mu_k}, \qquad k = 0, 1, 2, \quad (3.1)$

where $\mu_k$ is the mean value of the intensity of the $k$-th channel. This means that the mean RGB value of the new image obtained by the transformation in Equation (3.1) is a gray color with intensity 128. This Gray-World normalized image is invariant with respect to illuminant changes. In fact, let $(I_0, I_1, I_2)(x, y)$ and $(I'_0, I'_1, I'_2)(x, y)$ be the RGB values at $(x, y)$ under two illuminants. According to the von Kries model in Equation (2.3), $I'_k = \alpha_k I_k$ and thus $\mu'_k = \alpha_k \mu_k$, so that, by Equation (3.1), $\tilde I'_k(x, y) = 128\, \alpha_k I_k(x, y) / (\alpha_k \mu_k) = \tilde I_k(x, y)$.

The illuminant invariant feature used here integrates the color information encoded in the Gray-World normalized image $\tilde I$ with the LBPrius by computing, for each chromatic channel $k$ of $\tilde I$, the joint probability of chromatic intensity and texture, i.e.

$h_k(v, l) = \frac{1}{|S|}\, \#\{(x, y) \in S : \tilde I_k(x, y) = v,\ \mathrm{LBP}^{riu}_{R,P,k}(x, y) = l\}, \quad (3.2)$

where $\#$ indicates the cardinality of the subsequent set, $S$ is the image support, and $v$ and $l$ vary respectively over the intensity levels and the LBPriu labels $\{0, \dots, P + 1\}$. The three histograms defined by Equation (3.2) are then concatenated into a 2D probability map, that is encoded as a matrix (see Fig. 3 for an example). The probability map can also be weighted by the Euclidean distance of the pixels from the barycenter of the image support (i.e. from the barycenter of the set of pixels composing the image), in order to provide a global spatial description of the image color distribution. In particular, if $b$ is this barycenter, the weighted version of Equation (3.2) is

$h^w_k(v, l) = \frac{1}{W} \sum_{(x, y) \in S_k(v, l)} \|(x, y) - b\|, \quad (3.3)$

with $S_k(v, l) = \{(x, y) \in S : \tilde I_k(x, y) = v,\ \mathrm{LBP}^{riu}_{R,P,k}(x, y) = l\}$ and $W$ a normalization constant making the entries of $h^w_k$ sum to one.

The Gray-World color normalization can be replaced with other techniques that remove illuminant casts, such as [22] or [23]. The Gray-World normalization has been chosen here among other color normalization approaches due to its low computational charge. The histograms of two textured images, possibly related by an illuminant change, are then matched by comparing their $L^p$ distance, that is a metric usually defined over a Lebesgue integrable function space [13]. Precisely, given two integrable (real-valued or complex) functions $f$ and $g$ defined over a domain $\Omega$, the $L^p$ distance between $f$ and $g$ is

$d_p(f, g) = \left( \int_{\Omega} |f - g|^p \right)^{1/p},$

where $p \geq 1$. The $L^p$ distance can be easily re-formulated for discrete functions by replacing the integral with a sum, so that it can be computed also for discrete data, such as histograms or images. When $p = 2$, and $f$ and $g$ are encoded as vectors of finite dimension, the $L^p$ distance is the Euclidean distance.

3.2. THE vK-ALGORITHM
This algorithm is named vK-algorithm because it relies on the von Kries model [14] [16]. The vK-algorithm describes the image appearance by the joint probability of color and texture, without any color normalization. The probability map is computed as before, by splitting the input RGB image into its three channels and by computing the histograms of color and LBP as in Equations (3.2) or (3.3), where the Gray-World normalized channels $\tilde I_k$ are replaced by the original channels $I_k$. According to the von Kries model, the probability maps $h_k$ and $h'_k$ of each color channel of two images $I$ and $I'$ of the same scene imaged under two different illuminants are stretched versions of each other by the von Kries coefficients along the horizontal (intensity) axis, i.e. $h'_k(\alpha_k v, l) = h_k(v, l)$ for any $v$ (see Fig. 3).
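As an illustration of the descriptors in Equations (3.1)-(3.3), which both algorithms share, the following sketch computes the Gray-World normalization, the concatenated joint color-texture probability map (optionally weighted by the distance from the barycenter), and the discrete $L^p$ distance. It assumes NumPy and the lbp_riu_map helper sketched earlier; the 256 intensity bins, the rounding of the normalized intensities, and the clipping are illustrative choices, not necessarily those of the original implementation.

```python
import numpy as np

def gray_world(rgb):
    # Eq. (3.1): scale each channel so that its mean value becomes 128 (gray).
    out = np.empty(rgb.shape, dtype=float)
    for k in range(3):
        out[..., k] = 128.0 * rgb[..., k].astype(float) / rgb[..., k].mean()
    return np.clip(out, 0, 255)

def joint_color_texture_map(rgb, R=1, P=8, weighted=False):
    # Eqs. (3.2)/(3.3): per-channel joint probability of intensity and LBPriu label,
    # concatenated into a single map of shape (3 * 256, P + 2).
    # lbp_riu_map is the helper from the previous sketch.
    m = int(np.ceil(R))
    maps = []
    for k in range(3):
        chan = rgb[..., k].astype(float)
        labels = lbp_riu_map(chan, R, P)                       # interior pixels only
        vals = np.round(chan[m:-m, m:-m]).astype(int).clip(0, 255)
        if weighted:
            # Eq. (3.3): weight each pixel by its Euclidean distance from the barycenter.
            ys, xs = np.indices(vals.shape)
            w = np.sqrt((ys - ys.mean()) ** 2 + (xs - xs.mean()) ** 2)
        else:
            w = np.ones(vals.shape)
        h = np.zeros((256, P + 2))
        np.add.at(h, (vals.ravel(), labels.ravel()), w.ravel())
        maps.append(h / h.sum())
    return np.vstack(maps)

def lp_distance(f, g, p=1):
    # Discrete L^p distance between two descriptors of equal shape.
    return float((np.abs(f - g) ** p).sum() ** (1.0 / p))
```

For the vK-algorithm, the same joint_color_texture_map function would simply be called on the raw RGB image, skipping gray_world.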
Given a query image $Q$ and a reference $I$, the vK-algorithm matches $Q$ and $I$ as follows:
a. the coefficients $\alpha_k$ are estimated by the method proposed in [15]: this minimizes the parametric Earth Mover's Distance between the RGB color histograms of $Q$ and $I$, or equivalently, between the horizontal projections of their probability maps. Formally, let $H_k$ and $H'_k$ be the histograms of the $k$-th color channel of $Q$ and $I$ respectively. By the von Kries model, $H'_k(\alpha_k v) = H_k(v)$. Thus, the method in [15] computes a set of corresponding intensity values of the two histograms and estimates $\alpha_k$ by a least square approach that finds the best line fitting these points. The similarity between $Q$ and $I$ is measured by the Earth Mover's Distance (EMD) between the color histograms aligned by the estimated coefficients. This distance will be referred to hereafter as parametric EMD, because of its dependency on the coefficients $\alpha_k$ (see [15] and [16] for more details);
b. the probability map is stretched accordingly by the estimated coefficients $\alpha_k$ along the intensity axis;
c. finally, the two probability maps are compared by an $L^p$ distance.
If $Q$ and $I$ are actually related by an illuminant change, then the distance between their probability maps after the stretching is zero.

4. EXPERIMENTS
The performance of the GW- and vK- retrieval algorithms has been measured on the public real-world image dataset Outex_TC_00014 [17], which will be indicated here as Outex for shortness. Outex consists of 68 different classes of colored textures, each represented by 20 images and captured under three different lights: a 2856K incandescent CIE A light denoted as "inca", a 2300K horizon sunlight denoted as "horizon", and a 4000K fluorescent light denoted as "tl84". Fig. 3 and Fig. 4 show some examples.

Figure 3 - A texture from Outex imaged under the three lights inca, tl84, horizon (see Section 4 for more details) with their Gray-World corrected images and the color & texture probabilities implemented by the GW- and vK-algorithms without and with weights ($R$ = 1, $P$ = 8).

As already mentioned in Section 1, given a query and a set of references, the image retrieval problem consists of finding the reference which is the most similar to the query with respect to the used features. In our framework, the query is correctly retrieved if its class matches the class of the selected reference. The experiments have been carried out by setting as references the images captured under an illuminant $\ell_1$ ($\ell_1$ = inca, horizon, tl84), and as queries the images taken under another illuminant $\ell_2$ ($\ell_2$ = inca, horizon, tl84), with $\ell_1 \neq \ell_2$. The retrieval accuracy has been measured by the average match percentile (AMP) and by the recognition rate (RR). The AMP is defined as

$\mathrm{AMP} = \frac{1}{M} \sum_{q=1}^{M} \frac{N - r_q}{N - 1},$

where $N$ is the number of references, $M$ is the number of queries, and $r_q$ indicates the position of the first correct response in the sorted list of the references output by the classifier for the $q$-th query, while RR is the percentage of texture images correctly classified (i.e. with $r_q$ = 1).

The performance of the GW- and vK- algorithms has been compared with that output by other approaches that use exclusively texture or color information. In particular, the following descriptors have been considered:
1. the histograms of the LBPrius computed over the RGB color channels;
2. the histograms of the LBPrius computed over the 1D chromaticity image;
3. the histograms of the RGB colors of the Gray-World normalized images;
4. the histograms of the RGB colors of the non-normalized images.
All the histograms listed before have been computed with and without the Euclidean distance weights.
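Before turning to the retrieval distances, here is a rough sketch of steps (a) and (b) of the vK-algorithm matching of Section 3.2 (the same estimation also underlies the parametric-EMD matching of the color histograms at point 4). The coefficient estimator below pairs corresponding quantiles of the query and reference channel distributions and fits a line through the origin by least squares: this is a simple stand-in for the parametric-EMD minimization of [15], which is more elaborate, and all names and parameters are illustrative.

```python
import numpy as np

def estimate_von_kries(channel_q, channel_r, n_quantiles=64):
    # Hypothetical stand-in for the estimator of [15]: pair corresponding quantiles
    # of the query and reference channel distributions (which, under the von Kries
    # model, lie on a line through the origin) and fit r = alpha * q by least squares.
    qs = np.linspace(0.05, 0.95, n_quantiles)
    q_vals = np.quantile(channel_q.ravel(), qs)
    r_vals = np.quantile(channel_r.ravel(), qs)
    return float(np.dot(q_vals, r_vals) / np.dot(q_vals, q_vals))

def stretch_map(prob_map, alpha, n_levels=256):
    # Step (b): stretch a per-channel (n_levels x (P + 2)) probability map along the
    # intensity (row) axis by alpha, re-binning the mass at the rounded target level.
    out = np.zeros_like(prob_map)
    for v in range(n_levels):
        v_new = min(int(round(alpha * v)), n_levels - 1)
        out[v_new] += prob_map[v]
    return out
```

Each per-channel block of the concatenated probability map would be stretched with its own coefficient before the final $L^p$ comparison of step (c).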
The algorithms using the features at points 1, 2, 3 perform image retrieval by a nearest neighbor classifier with the $L^p$ distance ($p$ = 1, 2) as similarity measure. In addition, for the feature at point 1, the log-likelihood statistic used in [3] has been considered:

$L(S, M) = -\sum_{k} \sum_{b} S_{k,b} \log M_{k,b},$

where $S_{k,b}$ ($M_{k,b}$, resp.) denotes a bin of the $k$-th channel histogram of the query (reference, resp.). The color histograms at point 4 have been matched by using the Earth Mover's Distance as proposed in [15], in order to ensure invariance against illuminant changes. Fig. 5 shows an image, its Gray-World normalization and its 1D chromaticity image, along with the descriptors at points 1, 2, 3, 4. The corresponding joint probability maps of color and texture used by the GW- and vK-algorithms are shown in Fig. 6. Tab. 1 summarizes the descriptors used in the experiments along with their acronyms.

Tab. 2 reports the mean values of AMP and RR averaged over the number of database images and of the pairs $(\ell_1, \ell_2)$. Tab. 2 also shows the rounded-up mean value of the rank $r$. The value of $R$ controls the size of the neighborhood of each pixel and thus the scale at which the texture is described. Neighborhoods of different size capture different visual cues of the texture, and some of them may even provide poor or noisy information. The value of $P$ is related to the robustness of the descriptors against in-plane rotation. The choice of the pair $(R, P)$ providing the best description in terms of retrieval performance is not addressed here, but the experiments have been carried out by considering two different values of $(R, P)$: (1, 8) and (2, 16).

All the retrieval algorithms achieve the best performance for $(R, P)$ = (2, 16), and with the descriptors weighted by the Euclidean distance from the image barycenter. Apart from the case of the LBPriu histograms of the RGB color channels (point 1 in Section 4), the retrieval performance does not change by using the distance $L^1$ or $L^2$ as similarity measure. Therefore, for the other features, the results obtained with $L^2$ as retrieval distance are omitted in Tab. 2. The worst results are obtained by using the LBPriu histograms with $(R, P)$ = (1, 8) and the log-likelihood statistic as retrieval distance (AMP = 0.8997 and RR = 10.61%). The performance significantly increases for $(R, P)$ = (2, 16) and for the $L^1$ distance. Also the use of the LBPriu histograms of the 1D chromaticity image does not provide good results, because the color information encoded in this image is too coarse. The GW- and vK- algorithms provide the best performance. In particular, the best results are obtained by the GW-algorithm with the weighted color and texture probability. In this case, AMP = 0.9878 and on average the correct image is at the 18th place of the ranked list of references. By discarding the Euclidean weights, the RR (AMP, resp.) is slightly higher (lower) than the RR (AMP, resp.) achieved in the best case. However, these differences are negligible. The vK-algorithm outputs similar values of AMP and mean rank, while its RR is remarkably higher than that obtained by the GW-algorithm: 47% against 37%. This means that the number of images correctly classified (i.e. with $r$ = 1) is higher for the vK-algorithm than for the GW-algorithm, but the images that are not correctly classified are ranked, on average, worse by the vK-algorithm than by the GW-algorithm. Therefore, on average, the AMP is more or less the same for both algorithms.
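For completeness, a small sketch of the evaluation protocol just described: nearest-neighbor ranking of the references and computation of AMP and RR. The descriptor extraction and the distance function are assumed to come from the earlier sketches, and the variable names are illustrative.

```python
import numpy as np

def evaluate_retrieval(query_descs, query_classes, ref_descs, ref_classes, dist):
    # Nearest-neighbor retrieval: rank all references for each query by increasing
    # distance. RR counts queries whose top-ranked reference has the correct class;
    # AMP averages (N - r) / (N - 1), with r the rank of the first correct reference.
    N = len(ref_descs)
    ref_classes = np.asarray(ref_classes)
    percentiles, hits = [], 0
    for q_desc, q_cls in zip(query_descs, query_classes):
        d = np.array([dist(q_desc, r_desc) for r_desc in ref_descs])
        ranked = ref_classes[np.argsort(d)]
        r = 1 + int(np.argmax(ranked == q_cls))   # rank of the first correct response
        percentiles.append((N - r) / (N - 1))
        hits += int(r == 1)
    return float(np.mean(percentiles)), 100.0 * hits / len(query_descs)
```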
5. CONCLUSIONS
In agreement with previous studies, this work has shown empirically that the joint use of color and texture improves the performance of retrieval algorithms employing color or texture only. The retrieval algorithms presented here ensure invariance against variations of illuminant and in-plane rotations of the image. Future work will include a multi-scale image analysis in order to achieve robustness against size changes, and the usage of the proposed algorithms for object detection and semantic image labeling.

Figure 4 - Some examples of colored textures from Outex.

Table 1: Acronyms of the retrieval algorithms.
Acronym | Descriptor
LBP(R, P) | Histograms of the LBPrius with radius R and P sampling points computed over the three color channels.
LBP(R, P), L2 | Histograms of the LBPrius with radius R and P sampling points computed over the three color channels, weighted by the Euclidean distance.
LBPC(R, P) | Histograms of the LBPrius with radius R and P sampling points computed on the 1D chromaticity image.
LBPC(R, P), L2 | Histograms of the LBPrius with radius R and P sampling points computed on the 1D chromaticity image, weighted by the Euclidean distance.
GW-Colors | Histograms of the Gray-World normalized color channels.
GW-Colors, L2 | Histograms of the Gray-World normalized color channels, weighted by the Euclidean distance.
GW-Colors + LBP(R, P) | Color & texture joint probability of Gray-World normalized images [GW-algorithm].
GW-Colors + LBP(R, P), L2 | Color & texture joint probability of Gray-World normalized images, weighted by the Euclidean distance [GW-algorithm].
Colors | Histograms of RGB color channels.
Colors, L2 | Histograms of RGB color channels, weighted by the Euclidean distance.
Colors + LBP(R, P) | Joint probability of color and texture [vK-algorithm].
Colors + LBP(R, P), L2 | Joint probability of color and texture, weighted by the Euclidean distance [vK-algorithm].

Table 2: Retrieval results (see Tab. 1 for the acronyms of the used descriptors). In the last four descriptors, the parametric EMD has been used for computing the von Kries coefficients and L1 to match the color corrected probability maps.
Descriptor | Retrieval Distance | AMP | RR (%) | Mean Rank (on 1360 img)
LBP(1, 8) | Log-Likelihood [3] | 0.89971 | 10.613 | 137
LBP(2, 16) | Log-Likelihood [3] | 0.93845 | 18.125 | 85
LBP(1, 8) | L2 | 0.90558 | 11.067 | 129
LBP(2, 16) | L2 | 0.94323 | 18.787 | 78
LBP(1, 8) | L1 | 0.93327 | 23.946 | 92
LBP(2, 16) | L1 | 0.96827 | 36.078 | 44
LBP(1, 8), L2 | L1 | 0.93492 | 23.836 | 89
LBP(2, 16), L2 | L1 | 0.96968 | 35.343 | 42
LBPC(1, 8) | L1 | 0.88932 | 11.618 | 151
LBPC(2, 16) | L1 | 0.90535 | 18.027 | 130
LBPC(1, 8), L2 | L1 | 0.89389 | 10.637 | 145
LBPC(2, 16), L2 | L1 | 0.90871 | 16.544 | 125
GW-Colors | L1 | 0.97943 | 21.446 | 29
GW-Colors, L2 | L1 | 0.97939 | 21.336 | 29
GW-Colors + LBP(1, 8) | L1 | 0.98433 | 31.716 | 22
GW-Colors + LBP(2, 16) | L1 | 0.98766 | 37.378 | 18
GW-Colors + LBP(1, 8), L2 | L1 | 0.98416 | 31.017 | 23
GW-Colors + LBP(2, 16), L2 | L1 | 0.98779 | 36.863 | 18
Colors | Parametric EMD | 0.95845 | 36.569 | 57
Colors, L2 | Parametric EMD | 0.96118 | 36.703 | 54
Colors + LBP(1, 8) | Parametric EMD, L1 | 0.98256 | 42.451 | 25
Colors + LBP(2, 16) | Parametric EMD, L1 | 0.98633 | 47.745 | 19
Colors + LBP(1, 8), L2 | Parametric EMD, L1 | 0.98310 | 42.096 | 23
Colors + LBP(2, 16), L2 | Parametric EMD, L1 | 0.98685 | 47.708 | 18

FUNDING
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
CONFLICT OF INTEREST
The author declares no conflict of interest, including any financial, personal or other relationship with other people and organizations within three years of beginning the submitted work that could inappropriately influence, or be perceived to influence, this work.

Figure 5 - An image, its Gray-World normalization, its 1D chromaticity image and the color and texture descriptors listed in Section 4, points 1, 2, 3, 4. The x-axis reports the intensity bins.

Figure 6 - Color and texture descriptors (with and without Euclidean weights) used by the vK- and GW-algorithms. These features refer to the image in Fig. 5. The black bands visible in the probability maps of the GW-algorithm are due to the discrete nature of the visual data (the value computed by Equation (3.1) is cast to an integer).

BIBLIOGRAPHY
[1] R. Veltkamp, H. Burkhardt and H.-P. Kriegel (eds), State-of-the-art in content-based image and video retrieval, Springer Science & Business Media, 2013.
[2] M. Lecca, "Color improves Texture Retrieval," in Proc. of X Colour Conference, Genova, Italy, 2014.
[3] M. Pietikäinen, T. Ojala and T. Mäenpää, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, 2002.
[4] J. Y. Choi, K. Plataniotis and Y.-M. Ro, "Using colour local binary pattern features for face recognition," in Proc. of 17th IEEE International Conference on Image Processing, 2010.
[5] G. Anbarjafari, "Face recognition using color local binary pattern from mutually independent color channels," EURASIP Journal on Image and Video Processing, vol. 1, no. 6, 2013.
[6] S. Banerji, A. Verma and C. Liu, "LBP and Color Descriptors for Image Classification," in Cross Disciplinary Biometric Systems, Springer, 2012, pp. 205-225.
[7] C. Zhu, C.-E. Bichot and L. Chen, "Multi-scale Color Local Binary Patterns for Visual Object Classes Recognition," in Proc. of 20th International Conference on Pattern Recognition, 2010.
[8] G. Xue, J. Sun and L. Song, "Dynamic background subtraction based on spatial extended center-symmetric local binary pattern," in IEEE International Conference on Multimedia and Expo, 2010.
[9] D. Huang, C. Shan, M. Ardabilian, Y. Wang and L. Chen, "Local Binary Patterns and Its Application to Facial Image Analysis: A Survey," IEEE Trans. on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 41, no. 6, pp. 765-781, Nov. 2011.
[10] S. Malathi and C. Meena, "An efficient method for partial fingerprint recognition based on local binary pattern," in IEEE International Conference on Communication Control and Computing Technologies, 2010.
[11] L. Nanni, A. Lumini and S. Brahnam, "Survey on LBP based texture descriptors for image classification," Expert Systems with Applications, vol. 39, no. 3, 2012.
[12] G. Schaefer, "Colour for Image Retrieval and Image Browsing," in Proc. of ELMAR, 2011.
[13] E. Schechter, Handbook of Analysis and its Foundations, London: Academic Press Inc, 1997.
[14] G. Finlayson, M. Drew and B. Funt, "Diagonal transforms suffice for color constancy," in Proc. of Fourth International Conference on Computer Vision, 1993.
[15] M. Lecca and S. Messelodi, "Illuminant Change Estimation via Minimization of Color Histogram Divergence," in Computational Color Imaging Workshop, 2009.
[16] M. Lecca, "On the von Kries Model: Estimation, Dependence on Light and Device, and Applications," in Advances in Low-Level Color Image Processing, Springer, 2014, pp. 95-135.
[17] T. Ojala, T. Mäenpää, M. Pietikäinen, J. Viertola, J. Kyllönen and S. Huovinen, "Outex - New framework for empirical evaluation of texture analysis algorithms," in Proc. of 16th International Conference on Pattern Recognition, 2002.
[18] D. Harwood, T. Ojala, M. Pietikäinen, S. Kelman and L. Davis, "Texture classification by center-symmetric auto-correlation, using Kullback discrimination of distributions," Pattern Recognition Letters, vol. 16, no. 1, pp. 1-10, 1995.
[19] M. Pietikäinen, T. Ojala and Z. Xu, "Rotation-invariant texture classification using feature distributions," Pattern Recognition, vol. 33, pp. 43-52, 2000.
[20] L. S. Davis, S. A. Johns and J. K. Aggarwal, "Texture analysis using generalized co-occurrence matrices," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 3, pp. 251-259, 1979.
[21] R. Porter and N. Canagarajah, "Robust rotation-invariant texture classification: wavelet, Gabor filter and GMRF based schemes," in Vision, Image and Signal Processing, IEE Proceedings, 1997.
[22] E. Provenzi, M. Fierro, A. Rizzi, L. De Carli and D. Marini, "Random Spray Retinex: A New Retinex Implementation to Investigate the Local Properties of the Model," IEEE Transactions on Image Processing, vol. 16, no. 1, pp. 162-171, 2007.
[23] A. Rizzi, C. Gatta and D. Marini, "From Retinex to Automatic Color Equalization: issues in developing a new algorithm for unsupervised color equalization," J. Electronic Imaging, vol. 13, no. 1, pp. 75-84, 2004.