INTERNATIONAL JOURNAL OF COMPUTERS COMMUNICATIONS & CONTROL Special Issue on Fuzzy Sets and Applications (Celebration of the 50th Anniversary of Fuzzy Sets) ISSN 1841-9836, 10(6):865-872, December, 2015.

A Retrospective Assessment of Fuzzy Logic Applications in Voice Communications and Speech Analytics

H.-N.L. Teodorescu

Horia-Nicolai L. Teodorescu
1. Romanian Academy - Iasi Branch, Romania, Iasi, Carol I, 8
2. Gheorghe Asachi Technical University of Iasi, Romania, Iasi, Str. D. Mangeron, 67
hteodor@etti.tuiasi.ro

Abstract: Voice and speech communication is a major topic covering simultaneously 'communication', 'control' (because it often involves control in the coding algorithms), and 'computing' - from speech analysis and recognition, to speech analytics and speech coding over communication channels. While fuzzy logic was specifically conceived to deal with language and reasoning, its use in this field remains limited. We discuss some of the main current applications from the perspective of half a century since the inception of fuzzy logic.
Keywords: fuzzy logic, fuzzy system, speech, communication, VAD, speech segmentation, speech coding, speech analytics.

1 Introduction

At 50 years since the advent of fuzzy logic, 40 years since Lotfi A. Zadeh introduced the concept of linguistic variable [50], and more than 60 years since the mathematician Grigore C. Moisil argued that a new logic must be invented for describing human language and reasoning, it is compelling to ask: how much has fuzzy logic contributed to our understanding and technical use of language and speech in communications? As fuzzy logic (FL) was specifically conceived to model the vagueness of human language and reasoning, one could expect it to have played a central part in improving voice communications and speech recognition. Yet, the current situation does not seem to fully support this expectation - at least not at the level of popularity FL gained in (fuzzy) control theory.
In the review [44], not a single mention of fuzzy logic is made in connection with voice communication, showing no penetration into the mainstream of communication applications almost 30 years after the first paper on FL. The situation has not improved much since. The recent review [11] deplores the fact that there are very few, if any, papers using FL in the major conferences devoted to speech. Major journals only seldom publish papers on FL in speech applications and voice communications. For example, there is a single paper in this Journal that refers to the control and optimization of voice communications, rather indirectly, namely [16]; that paper does not use FL-based methods. There is also another paper referring to fuzzy control for data and, indirectly but not directly, to voice communication, [48]. There are, however, sparse papers on the topic in major journals; see for example the approach in [14] on a fuzzy traffic controller for ATM networks. One hypothesis for explaining this astonishing, general situation - the low number of papers on voice communication and speech - is that something is still missing to allow for the expected eruption of FL applications in the field. We critically review the state of the art and search for explanations for the current state of affairs. In this paper we assess the use of FL in four narrow sub-domains: voice activity detection (VAD), speech segmentation, and speech coding, which are strongly related, on the one hand, and speech analytics, on the other.

Copyright © 2006-2015 by CCC Publications

2 Some Applications of FL to Voice Communications and Speech Coding

2.1 Fuzzy VADs and FL in Speech Segmentation

Voice communication in telephony systems and over the Internet is done in digital form, by packets of data with speech coded using PCM or other coding techniques.
To minimize the transmission effort - thus optimizing transmission capacity and energy consumption - only useful packets should be sent. Speech is full of pauses that may include ambient noise, and sending pause (noise) packets over the networks is useless. Therefore, detecting speech and noise (pause) segments, then coding and sending only the speech segments, may significantly improve communication efficiency and useful channel capacity. The so-called voice-activity detectors (VADs) are meant to separate useful and useless segments before transmission and are included in virtually all communication equipment. The main difficulty in building high-quality VADs is to differentiate between consonants and noise, because some of the consonants, especially the fricative ones, are noise-like. Both the frequency spectra and the amplitude of fricatives such as /s/, /f/ are close to white noise of low amplitude, as encountered in offices. That makes the task of the voice activity detectors difficult. Compounding this is the variability of the noise, which is typically nonstationary and may be white, pink, impulsive, or a mixture of them, with variable amplitudes. Discerning between noise and unvoiced consonants is a matter of classification, possibly solved with fuzzy voice activity detection (FVAD) algorithms as in [3], [4], [5], [6], [7], [8], [12]. The VA detection needs several preliminary stages. In one approach, one detects and separates the periodical and a-periodical segments (PAP analysis) in the speech [3], [4], [5] using a linear predictor (LP). LPs approximate the sampled speech signal $s_n$ as a linear combination of the previous samples, according to $s^a_n = \sum_{k=1}^{M} a_k s_{n-k} + \sum_{j=1}^{Q} b_j s^a_{n-j}$, where $s^a$ denotes the approximated samples and $a_k$, $b_j$ are the LP coefficients. LPs are able to model periodic signals well, with low error $e_n = s_n - s^a_n$, while they are inefficient for a-periodic signals (large prediction errors).
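The periodic/a-periodic contrast that PAP analysis exploits can be illustrated with a minimal sketch (this is not the predictor of [3]-[5]; the two-tap coefficients below are chosen analytically for a pure tone rather than estimated from data):

```python
import math
import random

def two_tap_prediction_error(s, a1, a2):
    """Mean absolute error of the linear predictor s_a[n] = a1*s[n-1] + a2*s[n-2]."""
    errors = [abs(s[n] - (a1 * s[n - 1] + a2 * s[n - 2])) for n in range(2, len(s))]
    return sum(errors) / len(errors)

# A pure tone obeys s[n] = 2*cos(w)*s[n-1] - s[n-2] exactly (trig identity),
# so the matched 2-tap LP predicts it with essentially zero error.
w = 2 * math.pi * 200 / 8000                 # 200 Hz tone at 8 kHz sampling
tone = [math.sin(w * n) for n in range(400)]
e_tone = two_tap_prediction_error(tone, 2 * math.cos(w), -1.0)

# White noise has no such linear structure: the same predictor fails badly.
random.seed(0)
noise = [random.uniform(-1, 1) for _ in range(400)]
e_noise = two_tap_prediction_error(noise, 2 * math.cos(w), -1.0)

print(e_tone < 1e-9)    # periodic signal: near-perfect prediction
print(e_noise > 0.1)    # a-periodic signal: large residual
```

Thresholding the residual energy of a (properly estimated) LP in this way is what separates periodic from a-periodic frames in a PAP front end.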
Signal parameters such as the energy and the number of zero-crossings (NZC) are also used to supplement the LP PAP analysis, where the NZC is computed as the number of times, in a specified-length segment of signal (signal window), successive samples satisfy $s_{n-1} s_n \le 0$. Alternatively, one may use only the amplitude, spectral properties such as the ratio of powers in the low and high frequency bands and the NZC, possibly supplemented with the values of the self-correlation function, or properties of the cepstrum or of the Mel-spectrum (Mel-Frequency Cepstral Coefficients - MFCC), etc., to discriminate between voiced, unvoiced, and noise segments. In VAD, as well as in speech segmentation and speech and emotion recognition, the decision is made based either on the original parameters, such as LPC coefficients, energy, and NZC, or on a set of derived, fused parameters - the representation space. In the second case, several parameters in the primary parameter space are processed together and a new representation (representation space) is derived, for example the coherence between the periodic part of the signal and the noise (remaining part), as in [3], or the fuzzy information space representation as in [42]. VADs are included in communication standards, but none of the standards refers to FL and FVADs. Yet, [3] found that, employing a decision based on FL rules applied to the 'coherence measure between the noisy speech and its prediction residue', the FVAD performs 'globally better than G.729B and presents moderate improvement when compared to UMTS 3G TS 26.094 VAD.' They used the coherence function computed on every frame $k$, $C(f,k)$, with $C^2(f,k) = \frac{S^2_{sn}(f,k)}{S_s(f,k) \times S_n(f,k)}$, where $f$ is the frequency, $S_s$ and $S_n$ denote the spectra in the frame for the signal $s$ and the noise $n$, and $S_{sn}$ stands for the inter-signal spectral density [3], [4].
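The NZC feature just defined is trivial to compute; the following sketch (frame length, sampling rate, and tone frequency are illustrative choices) shows why it separates low-frequency voiced frames from noise-like frames:

```python
import math
import random

def zero_crossings(frame):
    """NZC: count the sample pairs with s[n-1]*s[n] <= 0 in the window."""
    return sum(1 for n in range(1, len(frame)) if frame[n - 1] * frame[n] <= 0)

fs = 8000                                                     # 8 kHz sampling
# A voiced-like frame: a 120 Hz tone, 100 ms long -> about 2 crossings per period.
voiced = [math.sin(2 * math.pi * 120 * n / fs) for n in range(fs // 10)]

# A noise-like frame: successive samples change sign about half the time.
random.seed(1)
noisy = [random.uniform(-1, 1) for _ in range(fs // 10)]

print(zero_crossings(noisy) > zero_crossings(voiced))   # noise crosses far more often
```

A fricative or a noise segment yields an NZC an order of magnitude above that of a voiced frame, which is why NZC (often as a per-frame rate) is a standard cheap feature in VAD front ends.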
After defining a set of frequency bands $B_i$, the coherence function on each band is computed as $C_{B_i}(k) = \sum_{f \in B_i} |C(f,k)|$ and these values are fuzzified according to three membership functions [3]. A fuzzy decision is optimized for determining the type of signal segment and thus the VA. Beritelli et al. [6], [7], [8] tested another approach, using the same parameters employed by the ITU-T G.729 VAD standard, namely the energy differences between successive speech frames, for the full band, $\Delta E_t$, and the low-frequency band, $\Delta E_L$, the difference of the NZCs, $\Delta ZC$, and the spectral distortion $\Delta S$ between successive frames. In their VAD algorithm, the decision is made based on a set of simple fuzzy rules given in [7], such as 'IF ($\Delta S$ is medium or low) THEN (voice is active)' and 'IF ($\Delta E_L$ is low) AND ($\Delta S$ is very low) AND ($\Delta ZC$ is high) THEN (voice is active)' (rules from [12]). Further improving the system, these authors considered multi-channel (two or several microphones) systems and took into account the delays between the signals. Using the outputs of the basic FVAD and the delays as inputs to a fuzzy network and training this complex FVAD, they obtained better-performing VADs than both the simpler FVAD and the G.729 standard VAD. Further refinements to increase the robustness in noise are given in [6], [7], [8], [12]. These authors report in [9] an improvement of more than 80% in false activity detection, compared with the G.729 VAD. A close topic is that of acoustic event detectors; we note the interesting approach in [45], where information fusion for the classification of non-speech sounds is performed by a skilled use of fuzzy integrals. Similarly, FL-based techniques applied to speech segmentation have been proposed by many authors, but the penetration of these techniques into the mainstream of speech segmentation is still limited.
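The two quoted rules can be sketched as a tiny Mamdani-style inference. The membership function shapes and the normalization of the features to [0, 1] below are illustrative assumptions, not the tuned functions of [7]:

```python
def trimf(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b (illustrative shapes)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Illustrative linguistic terms over features normalized to [0, 1].
low      = lambda x: trimf(x, -0.01, 0.0, 0.4)
very_low = lambda x: trimf(x, -0.01, 0.0, 0.2)
medium   = lambda x: trimf(x, 0.2, 0.5, 0.8)
high     = lambda x: trimf(x, 0.6, 1.0, 1.01)

def voice_activity(dEL, dS, dZC):
    """Degree of 'voice is active' from the two rules quoted in the text."""
    # Rule 1: IF (dS is medium OR dS is low) THEN (voice is active)
    r1 = max(medium(dS), low(dS))
    # Rule 2: IF (dEL is low) AND (dS is very low) AND (dZC is high) THEN (voice is active)
    r2 = min(low(dEL), very_low(dS), high(dZC))
    return max(r1, r2)          # max-aggregation of the rule activations

print(voice_activity(dEL=0.1, dS=0.5, dZC=0.9) > 0.5)   # medium spectral distortion: active
print(voice_activity(dEL=0.9, dS=0.95, dZC=0.1) < 0.1)  # silence/noise-like frame: inactive
```

In a real FVAD the aggregated degree would be thresholded (or defuzzified) per frame and smoothed over time by a hangover scheme.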
Speech segmentation may concern several levels, from voice activity to vowel (voiced sound)-consonant phoneme boundaries, to phonemic and syllabic unit segmentation, under various conditions of noise. Lin et al. [35] improved noisy speech segmentation using neural fuzzy networks based on so-called 'adaptive time-frequency (ATF) and refined time-frequency (RTF) parameters'. Hsieh et al. [24] presented a neuro-fuzzy segmentation method specific to the Mandarin language, while [47] combined a context-dependent phonetic HMM recognizer with a fuzzy logic post-correction system that takes into account the conditions specific to each phonetic boundary, to improve the precision of phoneme boundary determination. They report remarkable improvements, from errors of 400% for a basic HMM segmenter, compared to the durations determined by human operators, to a few percent after the corrections made by the fuzzy rules block. There are three main classes of techniques for speech coding [44]: waveform (direct) coding, coding in the model space, typically named parametric coding, and hybrid coding, which mixes the first two techniques. Speech coding is based on a compromise between the perceived speech quality and the bandwidth used, and thus the cost. The best quality is obtained by waveform coding methods, which are also the most costly in terms of transmitted bandwidth. Parametric (model) coding achieves low bandwidths, but the quality is poor-to-good at best. Today's speech coding, as in MPEG and telephony, is based on detailed psychoacoustic models derived from CELP. In brief, the low delay (LD) CELP coder standardized by CCITT as G.728 uses a 50th-order linear predictor (LP) excited by (i.e., having as input) predefined signals. The set of excitation signals is predefined and indexed in a 'codebook' (memory).
After the LPC coefficients are determined on a speech frame, one searches for the type of excitation and the best value of its amplitude (codebook gain) that produce, at the LPC synthesizer output, a signal that is the closest to the original speech frame (minimal error). The 'codebook' waveform index and the code of the best matching amplitude, together with the LPC coefficients, code the speech frame. A perceptual filter is also used to improve the perceptual quality of the decoded speech signal. While the LP is computed with efficient algorithms, searching the codebooks for the best excitation and its gain is time-consuming. Sheikhan et al. [43] proposed a fuzzy adaptive resonance theory mapping (ARTMAP) for achieving fast codebook index selection. However, these authors have not justified their choice (the ARTMAP) in terms of the required computation power and time (complexity), and one could suspect that a simpler NN could have performed more efficiently in this application.

2.2 FL in Speech Analytics - A Surprisingly Low Development

Already in 2006 a Gartner report [18] found that audio search and speech analytics are among the new technologies companies are adopting. Beyond marketing and services, speech analytics are used in various applications such as security [49] and learning and teaching [19]. Carlsson [10] argues that analytics and FL could be profitably combined in management. There are multiple reasons to believe that FL may play an essential role both in interpreting the text and in uncovering emotional states in speech; see for example the comparison of methods for emotion detection in [1], the example method in [2], and the recent excellent paper [32]. However, many approaches applying FL to emotional speech are somewhat mechanistic, with no direct relevance for the psychological, neurologic, and phonetic processes.
There is virtually no FL or analytics-related study on the influence of emotions on the articulatory processes (changes in vocal fold vibration and non-vocal fold vibration frequencies, degree of creakiness, changes in the articulation place, and other elements of interest in articulatory phonetics). There are exceptions to the mechanistic approach to assessing the speaker state; such exceptions deserve recognition, e.g., [21], [22], [23], who study correlations between qualitative representations of emotions, such as valence, activation, and dominance, and the acousto-physical parameters (acoustic features). On the other hand, one has to recognize the market value of the mechanistic approaches in analytics and other applications: they aim to produce real-life applications such as synthesizing emotional speech for the games and movies industry [40], monitoring the state of drivers [26], [27], [28], call center control, and crowd/social state monitoring and control. There has been little research on differentiating simulated emotions with variable degrees of likeness to the true ones. Some studies addressed emotions simulated by non-actors, aiming at voice communication enhancement and education; e.g., [19], [38] found significant differences in emotion detection when comparing emotions acted by laymen (corpus described in [19]) and by actors. Genuine emotions, as studied in several other works, were found more challenging to determine than acted ones. We expect that FL can help represent the degree of likeness achieved by actors and laymen and, moreover, help build detectors of simulated emotion. Because FL has found reputed applications in classification, e.g., the kNN method and neuro-fuzzy classifiers [51], and because analytics extensively use concepts more easily interpretable by people than by current machines, one could expect FL to be largely present in analytics modules, including speech analytics.
While this is not true, fuzzy ontologies are quite popular and at least some analytics have FL-based modules; see SAP HANA [53], which considers fuzzy search one of the 'few important techniques being used in Text Analysis'; namely, fuzzy search stands for 'finding strings that match a pattern approximately'. Although this use relates to language, the technique simply applies FL in defining a fuzzy distance over the set of strings. The search is based on a minimal matching value, and the respective command has the form CONTAINS(<string-to-look-for>, FUZZY(0.x)), with 0.x the minimal accepted similarity [53]. Note that this analytics product provides the function SCORE(), which determines the degree of similarity between every string in a specified set and the given string, but this is simplistic and far from what may be expected as the level of use of FL in understanding people's language communication. An interesting direction was opened in [39], who proposed the use of prosodic features to prioritize call servicing. While not using FL, the approach in [39] is an example of an application where FL looks promising for speech analytics. Similarly, there is an interesting paper, [13], thoroughly analyzing the possibilities of mining fuzzy association rules in texts; that track could be followed and applied one step further to finding fuzzy associations between textual information and prosody and emotions in speech. Another remarkable approach to analytics based on FL, but not related to speech, is constituted by a series of papers [30], [31] that apply fuzzy data analysis and inductive fuzzy classification, using a normalization of the likelihood ratio, to metadata and for knowledge discovery. Surprisingly, there are few reports on research on FL applied to speech analytics related to emotions.
Maybe this is due to the fact that sentiment analysis on texts developed earlier and is considered sufficient for deriving the mood of the speaker.

3 Discussion and Conclusions

While the contributions of FL to speech technology, specifically to VAD, speech segmentation, and coding, cannot be disregarded, these contributions seem less significant than one may expect from applying FL to speech. Few studies compare the results and the advantages or disadvantages of the FL approach against non-fuzzy approaches, or even try to justify the FL-based approach. While some good results obtained using FL approaches are expected based on the known universal approximation power (and thus nonlinear classification capability) of FLSs, the capabilities of others, including their generalization power, are less clear. A more systematic research program for employing FL in speech analysis is needed to overcome the current limits. An explanation for this state of affairs could be that FL requires extensive computations, while systems such as cellular phones and even PCs are restricted in computation power. However, recent processors have increased tremendously in computation power, favoring a larger use of FL. Thus, one can look forward with the hope that FL will achieve more in this field in the near future.

Acknowledgments. This work was supported in part (section on 'Analytics') by the SPS NATO Program under Grant G4877 /SfP 984877.

Bibliography

[1] Amir N., Kerret O., Karlinski D. (2001); Classifying emotions in speech: a comparison of methods, 7th EUROSPEECH Proc., Aalborg, 127-130.

[2] Austermann, A., Esau, N., Kleinjohann, L., Kleinjohann, B. (2005); Fuzzy emotion recognition in natural speech dialogue, Robot and Human Interactive Communication, ROMAN 2005, IEEE Int. Workshop on, 13-15 Aug. 2005, 317-322.

[3] Ben Jebara S., Ben Amor T. (2004); On improving voice activity detection by fuzzy logic rules: case of coherence based features, Proc.
Signal Processing Conference, 2004, 12th European, 725-728.

[4] Ben Jebara S. (2002); Coherence-based voice activity detector, IEE Electronic Lett., 38(22): 1393-1397.

[5] Ben Jebara S. (2008); Voice Activity Detection Using Periodic/Aperiodic Coherence Features, Signal Processing Conference, 2008, 16th European, Lausanne, Switzerland, 1-5.

[6] Beritelli F., Casale S., Cavallaro A. (1999); A multi-channel speech/silence detector based on time delay estimation and fuzzy classification, Proc. IEEE Int. Conf. ASSP, Phoenix, AZ, 15-19 Mar 1999, Vol. 1: 93-96.

[7] Beritelli F., Casale S., Cavallaro A. (1998); A robust voice activity detector for wireless communications using soft computing, IEEE J. Selected Areas Comm, 16(9): 1818-1829.

[8] Beritelli F., Casale S., Ruggeri G., Serrano S. (2002); Performance evaluation and comparison of G.729/AMR/fuzzy voice activity detectors, IEEE Signal Process Lett, 9(3): 85-88.

[9] Beritelli F., Casale S., Cavallaro A. (1998); Adaptive voice activity detection for wireless communications based on hybrid fuzzy learning, Global Telecommunications Conference, 1998, GLOBECOM 1998, The Bridge to Global Integration, IEEE, 3: 1729-1734.

[10] Carlsson C. (2013); On the Relevance of Fuzzy Sets in Analytics. In R. Seising, E. Trillas, C. Moraga, S. Termini (Eds.), On Fuzziness, Studies in Fuzziness and Soft Computing, 298: 83-89.

[11] Carvalho, J.P., Batista F., Coheur L. (2012); A Critical Survey on the use of Fuzzy Sets in Speech and Natural Language Processing, Fuzzy Systems (FUZZ-IEEE), 2012 IEEE International Conference on, 1-8.

[12] Cavallaro A., Beritelli F., Casale S. (1998); A Fuzzy Logic-Based Speech Detection Algorithm For Communications in Noisy Environments, Proc. 1998 IEEE Int. Conf. Acoustics, Speech and Signal Process, 1: 565-568.

[13] Chen Y.-L., Weng C.-H. (2009); Mining fuzzy association rules from questionnaire data, Knowledge-Based Systems, 22: 46-56.

[14] Cheng R.G., Chang C.J.
(1996); Design of a fuzzy traffic controller for ATM networks, IEEE-ACM Trans. Networking, 4(3): 460-469.

[15] Cowie, R., Douglas-Cowie, E., Tsapatsoulis, N., Votsis, G., Kollias, S., Fellenz, W., Taylor, J.G. (2001); Emotion recognition in human-computer interaction, IEEE Signal Process Magazine, 18(1): 32-80.

[16] Dhavarudha E., Charoenlarpnopparut C., Runggeratigul S. (2015); Traffic Control Based on Contention Resolution in Optical Burst, International Journal of Computers Communications & Control, 10(1): 49-61.

[17] El Ayadi M., Kamel M.S., Karray F. (2011); Survey on speech emotion recognition: Features, classification schemes, and databases, Pattern Recognition, 44(3): 572-587.

[18] Fenn J. (2006); Survey Shows Adoption and Value of Emerging Technologies, Gartner Research, 23 March 2006, Number G00138453.

[19] Feraru, S.M., Teodorescu, H.N., Zbancioc, M.D. (2010); SRoL - Web-based Resources for Languages and Language Technology e-Learning, International Journal of Computers Communications & Control, 5(3): 301-313.

[20] Gharavian D., Sheikhan M., Nazerieh A., Garoucy S. (2012); Speech emotion recognition using FCBF feature selection method and GA-optimized fuzzy ARTMAP neural network, Neural Computing and Applications, 21(8): 2115-2126.

[21] Grimm, M., Kroschel, K., Narayanan, S. (2007); Support Vector Regression for Automatic Recognition of Spontaneous Emotions in Speech, Proc. ICASSP 2007, Honolulu, HI, 4: 1085-1088.

[22] Grimm, M., Kroschel, K., Mower, E., Narayanan, S. (2007); Primitives-based evaluation and estimation of emotions in speech, Speech Commun, 49(10-11): 787-800.

[23] Grimm M., Kroschel K. (2007); Rule-Based Emotion Classification Using Acoustic Features, Speech Communication, 49(10): 787-800.

[24] Hsieh C.T., Su M.C., Lai E., Hsu C.H. (1999); A Segmentation Method for Continuous Speech Utilizing Hybrid Neuro-Fuzzy Network, J.
Information Sci. & Engineering, 15: 615-628.

[25] Juang C.-F., Cheng C.-N., Chen T.M. (2009); Speech detection in noisy environments by wavelet energy-based recurrent neural fuzzy network, Expert Systems with Applications, 36(1): 321-332.

[26] Kamaruddin, N., Wahab, A. (2010); Driver behavior analysis through speech emotion understanding, IEEE Intell Vehicles Symp 2010, San Diego, CA, 238-243. DOI: 10.1109/IVS.2010.5548124

[27] Kamaruddin N., Wahab A., Quek C. (2012); Cultural dependency analysis for understanding speech emotion, Expert Systems with Applications, 39(5): 5115-5133.

[28] Kamaruddin N., Wahab A. (2009); Features extraction for speech emotion, J. Computational Methods in Science and Engineering, 9(1-Suppl.): 11-12.

[29] Kasabov, N., Iliev, G. (2000); Hybrid system for robust recognition of noisy speech based on evolving fuzzy neural networks and adaptive filtering, Proc. Int. Conf. IJCNN 2000, 24-27 Jul 2000, Como, Italy, 5: 91-96. DOI: 10.1109/IJCNN.2000.861440

[30] Kaufmann M.A. (2008); Inductive Fuzzy Classification in Marketing Analytics (Fuzzy Management Methods), Springer [Kindle Edition].

[31] Kaufmann M.A., Portmann E., Fathi M. (2013); A Concept of Semantics Extraction from Web Data by Induction of Fuzzy Ontologies, 2013 IEEE Int. Conf. Electro-Information Tech EIT, 1-6.

[32] Kazemzadeh A., Lee S., Narayanan S. (2013); Fuzzy Logic Models for the Meaning of Emotion Words, IEEE Computational Intelligence Magazine, 8(2): 34-49.

[33] Lee C.M., Narayanan S.S. (2005); Toward detecting emotions in spoken dialogs, IEEE Trans Speech and Audio Process, 13(2): 293-303.

[34] Lee C.M., Narayanan S. (2003); Emotion recognition using a data-driven fuzzy inference system, Proc. EUROSPEECH, Geneva, 157-160.

[35] Lin, C.T., Wu, R.C., Wu, G.D. (2002); Noisy Speech Segmentation-Enhancement with Multiband Analysis and Neural Fuzzy Networks, Int J Pattern Recognition and AI, 16(7): 927-955.

[36] Ndousse, T.D.
(1994); Fuzzy neural control of voice cells in ATM networks, IEEE J. on Selected Areas in Communications, 12(9): 1488-1494.

[37] Ndousse, T.D. (1998); Fuzzy expert systems in ATM networks, in Fusion of Neural Networks, Fuzzy Systems and Genetic Algorithms: Industrial Applications, Lakhmi C. Jain, N.M. Martin (Eds.), CRC Press, Boca Raton, USA, 229-284.

[38] Pavaloi, I., Rotaru F. (2011); A Study on Duration for Different Pronunciations in Emotional States, Proc. 3rd Int. Conf. EHB, Iasi, Romania.

[39] Polzehl T., Metze F. (2008); Using prosodic features to prioritize voice messages, Proc. Searching Spontaneous Conversational Speech Workshop SIGIR 2008, Singapore, July 2008, ACM.

[40] Qin Y., Zhang X., Ying H. (2010); A HMM-based fuzzy affective model for emotional speech synthesis, 2nd Int. Conf. ICSPS, 3: 525-528. DOI: 10.1109/ICSPS.2010.5555658

[41] Ramirez J. et al. (2004); Efficient voice activity detection algorithms using long-term speech information, Speech Commun, 42: 271-287.

[42] Rodriguez W., Teodorescu H.N., Grigoras F., Kandel A., Bunke H. (2002); A fuzzy information space approach to speech signal non-linear analysis, Int. J. Intelligent Systems, 15(4): 343-363.

[43] Sheikhan M., Garoucy S. (2010); Reducing the Codebook Search Time in G.728 Speech Coder Using Fuzzy ARTMAP Neural Networks, World Applied Sciences Journal, 8(10): 1260-1266.

[44] Spanias A.S. (1994); Speech Coding: A Tutorial Review, Proc. of the IEEE, 82(10): 1541-1582.

[45] Temko A., Macho D., Nadeu C. (2008); Fuzzy integral based information fusion for classification of highly confusable non-speech sounds, Pattern Recognition, 41(5): 1814-1823.

[46] Tian Y., Wu J., Wang Z., Lu D. (2003); Fuzzy clustering and Bayesian information criterion based threshold estimation for robust voice activity detection, 2003 IEEE Int. Conf. ASSP - ICASSP'03, 1: 444-447.

[47] Toledano D.T., Rodríguez Crespo M.A., Escalada Sardina J.G.
(1998); Trying to Mimic Human Segmentation of Speech using HMM and Fuzzy Logic Post-correction Rules, 3rd ESCA/COCOSDA Workshop (ETRW), Nov. 26-29, SSW3-1998, 207-212.

[48] Zare, H., Adibnia, F., Derhami, V. (2013); A Rate based Congestion Control Mechanism Using Fuzzy Controller in MANETs, International Journal of Computers Communications & Control, 8(3): 486-491.

[49] Yang M., Kiang M., Ku Y., Chiu C., Li Y. (2011); Social Media Analytics for Radical Opinion Mining in Hate Group Web Forums, J. Homeland Security and Emergency Management, 8(1): 1547-7355.

[50] Zadeh, L.A. (1975); The Concept of a Linguistic Variable and Its Application to Approximate Reasoning - I, Information Sciences, 8(3): 199-249.

[51] Zbancioc M., Feraru M. (2012); The Analysis of the FCM and WKNN Algorithms Performance for the Emotional Corpus SROL, Advances Electrical Comput Engng, 12(3): 33-38. DOI: 10.4316/AECE.2012.03005

[52] Zhao H., Wang G., Xu C., Yu F. (2011); Voice activity detection method based on multivalued coarse-graining Lempel-Ziv complexity, Comput. Sci. Inf. Syst., 8(3): 869-888.

[53] http://saphanatutorial.com/sap-hana-fuzzy-search/