Decision Making: Applications in Management and Engineering
Vol. 1, Issue 2, 2018, pp. 111-120
ISSN: 2560-6018, eISSN: 2620-0104
DOI: https://doi.org/10.31181/dmame1802119p

GENDER AND AGE STRUCTURE AS RISK FACTORS OF CAROTID ARTERY STENOSIS AND SPECIFIC THEMATIC AREAS OF CARTOGRAPHY

Ljubiša Preradović 1*, Vlado Đajić 2, Gordana Jakovljević 1

1 University of Banja Luka, Faculty of Architecture, Civil Engineering and Geodesy, Bosnia and Herzegovina
2 University of Banja Luka, Faculty of Medicine, Bosnia and Herzegovina

* Corresponding author. E-mail addresses: ljubisa.preradovic@aggf.unibl.org (Lj. Preradović), sinapsavla@yahoo.com (V. Đajić), gordana.jakovljevic@aggf.unibl.org (G. Jakovljević)

Received: 6 March 2018; Accepted: 2 September 2018; Available online: 2 September 2018.

Original scientific paper

Abstract: The stroke prevention project was implemented in the Republic of Srpska between 2012 and 2017, during which 38,863 patients of both genders were examined. Each patient underwent an ultrasound examination of the blood vessels of the neck on both sides. All examinations were standardized and carried out by specially trained researchers. The results are presented using descriptive statistics and the Mann-Whitney U test, which showed a statistically significant difference in carotid artery stenosis between male and female patients. The geographic information system was used to map carotid artery stenosis with the aim of determining the susceptibility of the population of a particular area, city and/or municipality to this disease and of predicting it. The created epidemiological patterns show a correlation between age structure and a particular area.

Key words: carotid artery; GIS; mapping; prevention; risk factors

1 Introduction

Annually, about 4.5 million people die of a stroke, one of the most severe and most common diseases of modern man. With regard to its consequences, stroke is the leading cause of disability of modern man, and its prevention is therefore very important (Primatesta et al., 2007). Prevention requires detection of people with stroke risk factors (high blood pressure, diabetes, heart disease, high blood lipids, overweight, smoking, a family history of stroke and exposure to stress), as well as detection of pathological changes in the blood vessels of the neck and the head, whose treatment can prevent a stroke (Autret et al., 1987; Hennerici et al., 1987; O'Holleran et al., 1987; Norris et al., 1991; Inzitari et al., 2000; Thom et al., 2008; Đajić et al., 2015).

In the Republic of Srpska there is a great number of citizens with so-called stroke risk factors who cannot afford an ultrasound examination. This project provides citizens with a free and fast ultrasound screening of the blood vessels in the neck and the head, thus contributing to stroke prevention. The geographic information system (GIS) enables identification of epidemiological patterns connecting the risk factors and a particular area.
The aim of this research is to detect pathological changes in the blood vessels of the head and the neck in people with stroke risk factors, and thus to contribute to stroke prevention, as well as to determine the prevalence of asymptomatic carotid disease in the general population on the basis of a random sample of patients who underwent an ultrasound examination of the blood vessels in the neck. Carotid artery stenosis is therefore mapped using the GIS, with the aim of determining the susceptibility of the population of a particular area to the disease and of predicting it.

2 Material and methods

In the period 2012-2017, 38,863 patients were examined, i.e. 24,411 (62.8%) females and 14,452 (37.2%) males. Examinees who had already suffered a stroke (MU) or a transient ischemic attack (TIA) were not included in the project. Before the examination, each patient filled in a standardized questionnaire asking for the following information: gender, age, height, weight, education, personal and family anamnesis of previous MU or TIA, heart disease, diabetes, hypertension, hyperlipidemia, smoking, and alcoholism. After filling in the questionnaire, each patient underwent an ultrasound examination of the blood vessels in the neck on both sides. All examinations were standardized and carried out by specially trained researchers.

The stroke prevention project on the territory of the Republic of Srpska is carried out with the aim of determining the prevalence of asymptomatic carotid disease in a representative sample of citizens of the Republic of Srpska. According to the last census (published in 2017), 1,228,423 citizens live in the Republic of Srpska (Popis BiH, 2013), that is, 1,170,342 citizens (Rezultati popisa u BiH, 2013); the difference in the number of citizens is due to the different methodologies applied in conducting the census. The previous census was published in 1991, but, because of the war, there was a large migration of the population, so that census could not be used for calculating the number of patients to be examined in particular municipalities; the sample was therefore formed on the basis of the list of voters. Local media, family doctors and the local population were informed about the project in advance through a campaign consisting of flyers, billboards, posters, media appearances, and so on. Each project participant was invited to come for an examination by a nurse or a family doctor, or checked in at the local medical institution on his or her own.

Tabular presentation was carried out using descriptive statistics and the Mann-Whitney U test, applying the analytic-statistical tools of SPSS (originally: Statistical Package for the Social Sciences), version 20, while graphical presentation was produced with SPSS, version 20 and Microsoft Excel 2007. Thematic maps were created in the ArcMap 10.2 software. The statistical data on the basis of which the mapping was carried out were prepared in Microsoft Excel 2007 (.csv format).
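The gender comparison reported in the next section can be reproduced outside SPSS; the following is a minimal sketch using SciPy, in which the two arrays of per-patient stenosis percentages are hypothetical placeholders (the patient-level data are not part of this paper).

```python
# Minimal sketch of the Mann-Whitney U test used in the paper (SPSS v20 was
# used originally); 'stenosis_male' and 'stenosis_female' are hypothetical
# stand-ins for the per-patient stenosis percentages.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
stenosis_male = rng.normal(21.5, 15.0, size=14452).clip(0, 100)
stenosis_female = rng.normal(17.6, 12.6, size=24411).clip(0, 100)

u_stat, p_value = mannwhitneyu(stenosis_male, stenosis_female,
                               alternative="two-sided")
print(f"U = {u_stat:.0f}, p = {p_value:.4g}")
print("medians:", np.median(stenosis_male), np.median(stenosis_female))
```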
3 Research results

On the territory of the Republic of Srpska, starting from 2012, the stroke prevention project has been carried out, with 38,863 examined patients (Table 1).

Table 1 Examined patients in the period 2012-2017

Year of examination | Male | Female | Total
2012 | 2,284 | 4,095 | 6,379
2013 | 2,743 | 4,281 | 7,024
2014 | 2,416 | 4,421 | 6,837
2015 | 3,466 | 5,931 | 9,397
2016 | 2,283 | 3,957 | 6,240
2017 | 1,260 | 1,726 | 2,986
Total | 14,452 | 24,411 | 38,863

The degree of carotid artery stenosis (blockage) ranged from 0 to 100% in patients of both genders. The median (Md) stenosis for all patients is 17.00% (in female patients the median is lower by 5.00% compared to male patients), Table 2. The average carotid artery stenosis for all patients is 19.03% (in female patients the average stenosis is lower by 3.95% compared to male patients).

Table 2 Degree of carotid artery stenosis

Gender of examinee | N | Minimum | Maximum | Median | Mean | Std. dev.
Male | 14,452 | 0 | 100 | 20.00 | 21.51 | 15.008
Female | 24,411 | 0 | 100 | 15.00 | 17.56 | 12.556
Total | 38,863 | 0 | 100 | 17.00 | 19.03 | 13.654

Fig. 1 shows the degree of carotid artery stenosis according to the gender of the patient. The Mann-Whitney U test showed a statistically significant difference (z = -27.485, p < 0.001) between carotid artery stenosis in female patients (n = 24,411, Md = 15.00) and male patients (n = 14,452, Md = 20.00).

Carotid artery stenosis of less than 20%, which therefore does not require any treatment, was found in 21,408 (55.1%) patients (14,631 or 59.9% of all female patients and 6,777 or 46.9% of all male patients). Observing the percentage representation of carotid artery stenosis by gender, a higher frequency of carotid artery stenosis in male patients can be noticed (Table 3), as follows:
- stenosis of 20-49%: 47.5% of male patients and 37.3% of female patients,
- stenosis of 50-69%: 4.1% of male patients and 2.1% of female patients,
- stenosis of 70-99%: 1.1% of male patients and 0.5% of female patients, and
- stenosis of 100%: 0.4% of male patients and 0.2% of female patients.

Fig. 1 Degree of carotid artery stenosis according to the patient's gender

Table 3 Degree of carotid artery stenosis (groups) according to the patient's gender, n (%)

Carotid artery stenosis (%) | Male | Female | Total
0-19 | 6,777 (46.9%) | 14,631 (59.9%) | 21,408 (55.1%)
20-49 | 6,870 (47.5%) | 9,109 (37.3%) | 15,979 (41.1%)
50-69 | 588 (4.1%) | 519 (2.1%) | 1,107 (2.8%)
70-99 | 157 (1.1%) | 113 (0.5%) | 270 (0.7%)
100 | 60 (0.4%) | 39 (0.2%) | 99 (0.3%)
Total | 14,452 (37.2%) | 24,411 (62.8%) | 38,863 (100.0%)

The percentage presence of carotid artery stenosis (groups) according to the patients' gender is shown in Fig. 2.

Fig. 2 Degree of carotid artery stenosis (groups) according to the patient's gender

Most of the examined patients were between 55 and 64 years of age (13,642 or 35.1%); of these, 6,679 had carotid artery stenosis of 20-49%. Every fourth patient (10,207 or 26.3%) was older than 64; 779 of them had carotid artery stenosis of 50-69% (70.4% of all patients with stenosis of 50-69% belonged to this age group), in 189 patients carotid artery stenosis was 70-99% (70.0% of all patients with stenosis of 70-99%), and 59 patients had complete blockage of the carotid artery (59.6% of all patients with complete blockage).
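The group percentages reported in Tables 3 and 4 are plain cross-tabulations; the following is a minimal pandas sketch of how such column percentages are obtained, using a hypothetical patient-level table rather than the project data.

```python
# Minimal sketch of the cross-tabulations behind Tables 3 and 4, assuming a
# hypothetical patient-level DataFrame with a gender and a stenosis percentage.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "gender": rng.choice(["male", "female"], size=1000),
    "stenosis": rng.uniform(0, 100, size=1000),
})
bins = [0, 20, 50, 70, 100, np.inf]
labels = ["0-19", "20-49", "50-69", "70-99", "100"]
df["stenosis_group"] = pd.cut(df["stenosis"], bins=bins, labels=labels, right=False)

# Column percentages per gender, analogous to Table 3.
table3 = pd.crosstab(df["stenosis_group"], df["gender"], normalize="columns") * 100
print(table3.round(1))
```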
Patients in the younger age categories had smaller carotid artery stenosis (Table 4).

Table 4 Degree of carotid artery stenosis (groups) according to the patients' age, n (column percentage)

Age group | 0-19 | 20-49 | 50-69 | 70-99 | 100 | Total
<=24 | 264 (1.2%) | 2 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 266 (0.7%)
25-34 | 1,638 (7.7%) | 6 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 1,644 (4.2%)
35-44 | 3,915 (18.3%) | 212 (1.3%) | 1 (0.1%) | 1 (0.4%) | 0 (0.0%) | 4,129 (10.6%)
45-54 | 6,895 (32.2%) | 2,036 (12.7%) | 33 (3.0%) | 5 (1.9%) | 6 (6.1%) | 8,975 (23.1%)
55-64 | 6,560 (30.6%) | 6,679 (41.8%) | 294 (26.6%) | 75 (27.8%) | 34 (34.3%) | 13,642 (35.1%)
>=65 | 2,136 (10.0%) | 7,044 (44.1%) | 779 (70.4%) | 189 (70.0%) | 59 (59.6%) | 10,207 (26.3%)
Total | 21,408 (55.1%) | 15,979 (41.1%) | 1,107 (2.8%) | 270 (0.7%) | 99 (0.3%) | 38,863 (100.0%)

The degree of carotid artery stenosis (groups) according to the patients' age is shown in Fig. 3.

Fig. 3 Degree of carotid artery stenosis (groups) according to the patients' age groups

4 Creation of thematic maps of the carotid artery

Thematic cartography is a cartographic discipline that enables the presentation of the spatial arrangement of the objects, phenomena and processes under study. The development of the geographic information system has made the collecting, processing and visualizing of spatial and associated data simpler. Geographic information systems (GIS) and spatial analysis techniques are powerful tools for describing epidemiological patterns, as well as for detecting, explaining and predicting clusters of diseases in space and time (Grobusch et al., 2016). The application of GIS to mapping anatomic features and clinical events has been infrequent in the GIS and medical literature (Garb et al., 2007). The greatest potential of GIS is its ability to clearly show the results of complex analyses through maps (Mullner et al., 2004). Unlike tables and spreadsheets with seemingly endless numbers, maps produced by GIS can transform data into information that can be quickly and easily communicated. These systems also extend the range of problems that the technology can help solve by allowing users to deal with complex problems more efficiently (Melnick & Flemming, 1999; Preradović et al., 2017).

The thematic maps of carotid artery stenosis (blockage) were created using the Esri ArcGIS 10.2 software, on the basis of a database. ArcGIS uses an object-relational database: simple tables and defined attribute types allow the storage of spatial data, and SQL (Structured Query Language) enables creating, modifying and querying the tables. Data are saved in the shapefile format. The geometry of an object in a .shp file can be represented by a point, line or polygon. Apart from the geometry, a .shp file also contains an attribute table which stores descriptive information, such as the name of the municipality, the postcode, etc. Spatial objects (the political borders of the municipalities in the Republic of Srpska) were used as spatial references for the presentation of carotid artery blockage; the borders of the municipalities are represented by polygons in .shp format. The cartogram method is used to show the prevalence of a certain degree of carotid artery blockage by the patients' age groups, while the average age of the population is presented by the coloring method, with the category borders defined by the natural breaks method.
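The mapping workflow described here and in the data-preparation step that follows (joining municipality polygons with a tabular .csv file on the municipality name and rendering class-colored maps) can also be scripted; the sketch below uses geopandas, with hypothetical file and column names in place of the project's actual data.

```python
# Minimal sketch of the join-and-map workflow described in the text, assuming
# hypothetical files 'municipalities_rs.shp' (polygon borders) and
# 'stenosis_by_municipality.csv' sharing a 'municipality' column.
import geopandas as gpd
import pandas as pd
import matplotlib.pyplot as plt

polygons = gpd.read_file("municipalities_rs.shp")
stats = pd.read_csv("stenosis_by_municipality.csv")

joined = polygons.merge(stats, on="municipality", how="left")

# Choropleth of, e.g., the share of patients with stenosis > 50%, classified
# with natural breaks (the mapclassify package must be installed).
ax = joined.plot(column="pct_stenosis_over_50", scheme="NaturalBreaks", k=5,
                 legend=True, edgecolor="black", linewidth=0.3)
ax.set_axis_off()
plt.savefig("stenosis_over_50_map.png", dpi=200)
```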
Data on the patients' age and carotid artery blockage were exported in .xlsx format. The carotid artery blockage is expressed as a percentage and sorted into 5 categories (0-19, 20-49, 50-69, 70-99, 100). The average age of the population was downloaded in .xlsx format from the official site of the 2013 Census of Population, Households and Dwellings in Bosnia and Herzegovina (Popis BiH, 2013). As the data in their original form were not suitable for further processing, they were harmonized and sorted, and the sorted data were saved in .csv format. The .csv format stores tabular data as plain text and ensures data exchange between different programs, which is why it is used in this paper. The spatial and statistical data were connected on the basis of a common field (the name of the municipality), using the Join option.

Fig. 4 shows carotid artery blockages (separately for each category of carotid artery blockage and age group) by municipalities in the Republic of Srpska.

Fig. 4 Carotid artery blockage by municipalities in the Republic of Srpska

Fig. 5 shows the percentages of patients with carotid artery stenosis higher than 50% by municipalities in the Republic of Srpska.

Fig. 5 Percentage of patients with carotid artery stenosis higher than 50% by municipalities in the Republic of Srpska

5 Conclusion

On the basis of these results, it is evident that minimal (0 to 19%) carotid artery stenosis is, in relation to the number of examined patients, more prevalent in female patients, while carotid artery stenosis that needs to be treated (conservatively and/or surgically) is more prevalent in male patients. The created epidemiological patterns indicate that the examinees in certain regions (cities and municipalities) have a high risk of stroke. In accordance with the obtained and presented research results, it is necessary to analyze the equipment of medical institutions in vulnerable regions, purchase additional medical equipment and educate health-care workers and the population, with the aim of reducing the risk of this very common disease with a high mortality rate, whose consequences are very severe for the patient, the family and the whole society.

References

Primatesta, P., Allender, S., Ciccarelli, P., Doring, A., Graff-Iversen, S., Holub, J., Panico, S., Trichopoulou, A. & Verschuren, W.M. (2007). Cardiovascular surveys: manual of operations. European Journal of Cardiovascular Prevention & Rehabilitation, 14, 53-61.

Thom, T., Haase, N., Rosamond, W., Howard, V.J., Rumsfeld, J., Manolio, T., Zheng, Z.J., Flegal, K., O'Donnell, C., Kittner, S., Lloyd-Jones, D., Goff, D.C. Jr, Hong, Y., Adams, R., Friday, G., Furie, K., Gorelick, P., Kissela, B., Marler, J., Meigs, J., Roger, V., Sidney, S., Sorlie, P., Steinberger, J., Wasserthiel-Smoller, S., Wilson, M. & Wolf, P. (2006). Heart disease and stroke statistics - 2006 update: a report from the American Heart Association Statistics Committee and Stroke Statistics Subcommittee. Circulation, 113, 85-151.

Autret, A., Pourcelot, L., Saudeau, D., Marchal, C., Bertrand, P. & de Boisvilliers, S. (1987). Stroke risk in patients with carotid stenosis. Lancet, 1, 888-890.
Inzitari, D., Eliasziw, M., Gates, P., Sharpe, B.L., Chan, R.K., Meldrum, H.E. & Barnett, H.J. (2000). The causes and risk of stroke in patients with asymptomatic internal-carotid-artery stenosis. North American Symptomatic Carotid Endarterectomy Trial Collaborators. The New England Journal of Medicine, 342, 1693-1700.

Hennerici, M., Hulsbomer, H.B., Hefter, H., Lammerts, D. & Rautenberg, W. (1987). Natural history of asymptomatic extracranial arterial disease: results of a long-term prospective study. Brain, 110, 777-791.

Norris, J.W., Zhu, C.Z., Bornstein, N.M. & Chambers, B.R. (1991). Vascular risks of asymptomatic carotid stenosis. Stroke, 22, 1485-1490.

O'Holleran, L.W., Kennelly, M.M., McClurken, M. & Johnson, J.M. (1987). Natural history of asymptomatic carotid plaque: five year follow-up study. The American Journal of Surgery, 154, 659-662.

Đajić, V., Miljković, S., Preradović, Lj., Vujković, Z. & Račić, D. (2015). Uticaj životnog doba i pola na karotidnu asimptomatsku bolest [The influence of age and gender on asymptomatic carotid disease]. Scripta Medica, 46, 42-47.

Popis BiH (2013). http://www.popis.gov.ba/popis2013/doc/rezultatipopisa_sr.pdf (accessed 17.02.2017).

Rezultati popisa u BiH (2013). http://www.rzs.rs.ba/front/article/2369/ (accessed 17.02.2017).

Grobusch, M., Grillet, M. & Tami, A. (2016). Applying geographical information systems (GIS) to arboviral disease surveillance and control: a powerful tool. Travel Medicine and Infectious Disease, 14, 9-10.

Garb, J., Ganai, S., Skinner, R., Boyd, C. & Wait, R. (2007). Using GIS for spatial analysis of rectal lesions in the human body. International Journal of Health Geographics, 6, 6-11.

Mullner, R., Chung, K., Croke, K. & Mensah, E. (2004). Geographic information system in public health and medicine. Journal of Medical Systems, 28(3), 215-221.

Melnick, A.L. & Flemming, D.W. (1999). Modern geographic information systems - promise and pitfalls. Journal of Public Health Management and Practice, 5(2), 8-22.

Preradović, Lj., Jakovljević, G. & Perišić, M. (2017). Creating epidemiological patterns of connection between risk factors and particular. Proceedings of ICMNEE 2017, Regional Association for Security and Crisis Management and European Centre for Operational Research, 1, 230-241.

© 2018 by the authors. Submitted for possible open access publication under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).


Decision Making: Applications in Management and Engineering
Vol. 1, Number 2, 2018, 81-92
ISSN: 2560-6018, eISSN: 2620-0104
DOI: https://doi.org/10.31181/dmame1802079s

ANFIS MODEL FOR DETERMINING THE ECONOMIC ORDER QUANTITY

Siniša Sremac 1*, Ilija Tanackov 1, Miloš Kopić 1, Dunja Radović 2

1 Faculty of Technical Sciences, University of Novi Sad, Novi Sad, Serbia
2 Faculty of Transport and Traffic Engineering, University of East Sarajevo, Doboj, Bosnia and Herzegovina

* Corresponding author. E-mail addresses: sremacs@uns.ac.rs (S. Sremac), ilijat@uns.ac.rs (I. Tanackov), miloskopic@uns.ac.rs (M. Kopić), radovic93@yahoo.com (D. Radović)

Received: 4 April 2018; Accepted: 27 August 2018; Available online: 30 August 2018.

Original scientific paper

Abstract: The determination of the economic order quantity is important for the rational realization of the logistics processes of transport, handling and storage in the supply chain. In this paper, an expert model for the determination of the economic order quantity has been developed.
The model has been developed using the hybrid artificial intelligence method of adaptive neuro-fuzzy inference systems (ANFIS). It has been used for modeling a complex logistics process in which it is difficult to determine the interdependence of the variables involved using classical methods. The hybrid method has been applied in order to exploit the advantages of the individual methods of artificial intelligence: fuzzy logic and neural networks. The experience of an expert and information on the operations of the company for a certain group of items have been used to form the model. The validity of the model results was analyzed on the basis of the average relative error, which showed that the model imitates the work of the expert in the observed company with great accuracy. A sensitivity analysis was applied, indicating that the model gives valid results. The proposed model is flexible and can be applied to various types of goods in supply chain management.

Key words: adaptive neuro-fuzzy inference systems, economic order quantity, supply chain management, logistics processes.

1. Introduction

The economy is largely in a phase of intense globalization. This does not mean only increasing interdependence of regional economies and levels of technological integration, but also significant structural changes in the field of science, highly developed technology and its way of functioning. Scientific and technological progress, in coordination with economic development, covers all areas of the economy, and its possibilities are used in the search for solutions for better organization and efficiency of the flow of goods (Sremac, 2013).

The determination of the economic order quantity (EOQ) is a logistics process that has a significant influence on the successful operation of a company (Melis Teksan & Geunes, 2016). From the logistics aspect, the determination of the EOQ requires adequate attention, since inadequate purchasing can additionally burden the company's business (Abraham, 2001). On the other hand, in order to achieve a high level of service for the client, all purchases should be realized independently of their value (Maddah & Noueihed, 2017).

Many phenomena in nature, society and the economy cannot be described, and their behavior cannot be predicted, by traditional mathematical methods (Griffis et al., 2012). Due to the lack of flexibility of this approach, the human factor compensates for the uncertainty of the mathematical model using knowledge based on experience (Negnevitsky, 2005) and makes decisions based on data that are difficult to enter into a mathematical model (Efendigil, 2014). A modern approach to determining the EOQ is the application of adaptive neuro-fuzzy inference systems (ANFIS), one of the hybrid methods of artificial intelligence.

The basic hypothesis of this paper is that it is possible to design a model for determining the EOQ on the hybrid neuro-fuzzy approach of artificial intelligence. The next goal is to use such a system effectively in the observed company in a highly dynamic and changing business environment. A further objective is that the proposed system shall be flexible and applicable in other companies for other types of goods in supply chain management (SCM).
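For context, the classical EOQ model referred to in the literature review below (Harris, 1913) gives the order quantity that balances ordering and holding costs; the short sketch that follows uses purely illustrative numbers and is background only, not part of the authors' neuro-fuzzy model.

```python
# Classical (Harris, 1913) EOQ formula, shown only as background for the
# neuro-fuzzy approach developed in this paper; D, S and H are illustrative.
from math import sqrt

D = 12000   # annual demand (units/year)
S = 50.0    # fixed cost per order
H = 2.5     # holding cost per unit per year

eoq = sqrt(2 * D * S / H)   # EOQ = sqrt(2DS/H)
print(f"EOQ = {eoq:.0f} units per order")
```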
The basic motive for the design of such a decision support system is the development of a tool for the EOQ that will be able to handle complex and real SCM processes using a hybrid artificial intelligence technique.

The rest of this paper is organized as follows. The relevant literature is classified and reviewed in Section 2. Section 3 describes the ANFIS used in the proposed methodology. Section 4 presents the proposed model and a sensitivity analysis for different membership functions. Concluding remarks are drawn in Section 5.

2. Literature review

A problem that often arises and is examined is determining the amount of goods needed to meet customers' demands (Lagodimos et al., 2018). A century ago, Harris (1913) introduced the EOQ inventory model. Most companies apply the EOQ model to determine the maximum level of inventory or the ordering lot size (Abdel-Aleem et al., 2017). The application of classical methods for the EOQ is based on limiting assumptions that cannot cover the nature of modern complex logistics processes, such as demand being constant in unit time, lead time being deterministic and stationary, constant prices, etc. (Maddah & Noueihed, 2017). Moreover, decision-making in SCM takes place in an environment where objectives and constraints often are not, and cannot be, precisely defined (Latif et al., 2014; Taleizadeh et al., 2016). Therefore, a certain approximation is required in order to obtain a high-quality model of a real system, where the application of artificial intelligence has an important role. Consequently, individual methods of artificial intelligence (Keshavarz Ghorabaee et al., 2016) or their combination in the form of hybrid methods are increasingly used in solving real and complex problems (Teksan & Geunes, 2016; Zavadskas et al., 2016).

Some researchers (Davis-Sramek & Fugate, 2007) interviewed a few visionaries in the field of SCM and recognized the strong call of these individuals for modeling and simulation to be involved in the research (Wallin et al., 2006). Modeling of the SCM seeks the best possible system configurations to minimize costs and increase operational efficiency in order to meet customer expectations (Bowersox et al., 2010). An important issue in SCM is the need to make the right decision despite the occurrence of significant ambiguity (Giannoccaro et al., 2003). In addition to fluctuations in demand and delivery times, vagueness is associated with the lack of information from the production and distribution processes in SCM (Chatfield et al., 2013). Some authors expressed the uncertainty of market demand and inventory costs in models based on fuzzy set theory (Azizi et al., 2015).

Hereinafter, a review of works from the field of SCM based on the neuro-fuzzy approach is given. Jang (1993) first introduced the ANFIS method by embedding the fuzzy inference system into the framework of adaptive networks. Demand uncertainty is considered in the optimization model of Gupta and Maranas (2003), in which, through a two-stage stochastic programming model, they consider all production decisions in the first stage and all the supply chain decisions in the second. Yazdani-Chamzini et al. (2012) used ANFIS and an artificial neural network (ANN) model for modeling the gold price. Guneri et al. (2011) developed a new method using ANFIS for the supplier selection problem.
Vahdani et al. (2012) reviewed the numerous quantitative methods for supplier selection and evaluation in the literature, where the most current techniques are hybrid approaches. Later, Ozkan and Inal (2014) employed ANFIS in the supplier selection and evaluation process. Several methods for EOQ determination in SCM have appeared in the literature, including approaches based on neuro-fuzzy systems (Yazdani-Chamzini et al., 2017). Paul et al. (2015) present the application of ANFIS and ANN in an inventory management problem to determine the optimum inventory level. Abdel-Aleem et al. (2017) study and analyze the optimal lot size in a real production system to obtain the optimal production quantity. ANFIS has a wide application in the fields of finance, marketing, distribution, business planning, information systems, production, logistics, etc. (Ambukege et al., 2017; Mardani et al., 2017; Rajab & Sharma, 2017). The route guidance system developed by Pamučar & Ćirović (2018) is an adaptive neuro-fuzzy inference guidance system that provides instructions to drivers based upon "optimum" route solutions.

3. Description of adaptive neuro-fuzzy inference systems

ANFIS are a modern class of hybrid systems of artificial intelligence. They can be described as artificial neural networks characterized by fuzzy parameters. By combining two different concepts of artificial intelligence, hybrid systems of homogeneous structure try to exploit the individual strengths of fuzzy logic and artificial neural networks (Figure 1). Such engineered systems are increasingly used to solve everyday complex problems and, with the assistance of logistics experts and historical data, this approach can be designed on the basis of computer-aided systems.

Figure 1. Basic characteristics of fuzzy logic and neural networks

The possibility of displaying a fuzzy model in the form of a neural network is most often used in methods for the automatic determination of the parameters of the fuzzy model based on the available input-output data. The structure of adaptive neuro-fuzzy inference systems is similar to the structure of neural networks. The membership functions of the input data are mapped to the input data of the neural network, and the input-output laws are defined through the output data of the neural network (Figure 2).

Figure 2. The basic structure of adaptive neuro-fuzzy inference systems

The parameters characteristic of the corresponding membership functions change through the network learning process. The calculation of these parameters is usually done on the basis of the gradient vector, which is a measure of the accuracy with which the fuzzy inference system maps the input set into the output set for the given set of parameters (Cetisli, 2010).

The basic idea of the adaptive neuro-fuzzy inference system is based on fuzzy modeling and learning from a given dataset. Based on the input-output data set, an appropriate fuzzy inference system is formed and the parameters of the membership functions are calculated. The parameters of the membership functions of the fuzzy system are set using the backpropagation algorithm or a combination of that algorithm and the method of least squares. This setting allows fuzzy systems to learn from the input-output data set; the learning method is similar to that of neural networks.
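To make the structure concrete, the following is a minimal numerical sketch of the kind of zero-order Sugeno system that ANFIS tunes: Gaussian membership functions on each input, a grid partition of rules, product firing strengths, and constant rule consequents fitted by least squares. It is a simplified stand-in, not the authors' MATLAB implementation; the premise parameters are fixed here instead of being adjusted by backpropagation, and the data are hypothetical.

```python
# Minimal zero-order Sugeno fuzzy system of the kind ANFIS tunes: Gaussian
# input membership functions, grid partition of rules, product T-norm, and
# consequent constants estimated by least squares (the "hybrid" step that
# complements backpropagation of the premise parameters).
import itertools
import numpy as np

def gauss(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def make_premises(X, n_mfs):
    # Evenly spaced Gaussian MFs per input (centres c, widths s).
    prem = []
    for j, k in enumerate(n_mfs):
        lo, hi = X[:, j].min(), X[:, j].max()
        c = np.linspace(lo, hi, k)
        s = np.full(k, (hi - lo) / (2 * max(k - 1, 1)))
        prem.append((c, s))
    return prem

def firing_strengths(X, prem):
    # One rule per combination of MFs (grid partition); product T-norm.
    combos = list(itertools.product(*[range(len(c)) for c, _ in prem]))
    W = np.ones((X.shape[0], len(combos)))
    for r, combo in enumerate(combos):
        for j, m in enumerate(combo):
            c, s = prem[j]
            W[:, r] *= gauss(X[:, j], c[m], s[m])
    return W / W.sum(axis=1, keepdims=True)   # normalised firing strengths

def fit_consequents(X, y, prem):
    # Least-squares estimate of the constant consequent of each rule.
    W = firing_strengths(X, prem)
    p, *_ = np.linalg.lstsq(W, y, rcond=None)
    return p

def predict(X, prem, p):
    return firing_strengths(X, prem) @ p

# Hypothetical data: demand size, inventory level, price -> expert EOQ.
rng = np.random.default_rng(0)
X = rng.uniform([100, 10, 1], [1000, 500, 50], size=(50, 3))
y = 0.2 * X[:, 0] - 0.1 * X[:, 1] + 2.0 * X[:, 2]   # stand-in for expert decisions
prem = make_premises(X, n_mfs=[5, 3, 3])            # 5-3-3 values, as in the model below
p = fit_consequents(X, y, prem)                     # 5*3*3 = 45 rules / linear parameters
print("training RMSE:", np.sqrt(np.mean((predict(X, prem, p) - y) ** 2)))
```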
4. The development of the ANFIS model for determining the EOQ

4.1. Designing the model

This paper develops an adaptive neuro-fuzzy inference system model for determining the economic order quantity (ANFIS model EOQ) based on the input-output data in the observed company. The formation of the proposed model consists of the following steps:
- Determination of the input-output data set in a form customized for training the neuro-fuzzy inference system.
- Assumption of the model structure with parameters, whose rules map the input membership functions into output functions. The model is trained on the training data; in doing so, the parameters of the membership functions are modified according to the selected error criterion in order to obtain valid model results.

This way of modeling is appropriate if the training data are fully representative of all the properties that the ANFIS model should have. In some cases, the data used to train the network contain measurement errors, so they are not fully representative of all the features that should be included in the model. Therefore, the model should be checked using testing data. There are two ways of testing the model. The first is to check the model with input data that were not used for training; this procedure shows how accurately the model predicts the output values, and it is the one implemented in this paper. The second is a mathematical procedure in which the data used for training are reused as the testing data set, and the output must be obtained with a minimal error.

The model presented here was developed in MATLAB version R2007b using the ANFIS editor included in the Fuzzy Logic Toolbox. The ANFIS editor supports only Sugeno-type fuzzy systems (Tahmasebi & Hezarkhani, 2010). The benefits of the Sugeno type are that it is computationally more efficient, suitable for mathematical analysis, and works well with linear, optimization and adaptive techniques. The course of the ANFIS model formation is presented in Figure 3.

Figure 3. The model formation flowchart

The ANFIS model EOQ has the following structure. The input variables are the size of demand, the level of inventory and the price, while the output variable is the EOQ. The number of membership functions of the input variables is three, except for the input variable size of demand, which has five values. The input membership functions are Gaussian. The structure of the neural network is shown in Figure 4.

Figure 4. Fuzzy model mapped into a neural network

The developed model has the form of a multilayer neural network with forward propagation of the signal. The first layer represents the input variables, the hidden (middle) layer represents the fuzzy rules, and the third layer is the output variable. Fuzzy sets are defined in the form of link weights between nodes. Adjustments are performed in the adaptive nodes to reduce the error that occurs at the output of the model. The error is the difference between the known output values and the values obtained at the output of the neuro-fuzzy network. The signals on the network propagate forwards and the errors propagate backwards; thus, the output numerical value approaches the optimal, i.e. the required, value. The basic characteristics of the model are shown in Table 1.
Table 1. Basic characteristics of the ANFIS model EOQ

The key model characteristics are:
Number of nodes | 118
Number of linear parameters | 45
Number of nonlinear parameters | 22
Total number of parameters | 67
Number of training data pairs | 50
Number of testing data pairs | 10
Number of fuzzy rules | 45

The data set for the training of the neural network was obtained on the basis of concrete data on business operations and a survey of the logistics expert in the observed company. For the training (Figure 5), a hybrid optimization method was used, consisting of:
- the backpropagation algorithm, by which the errors of the variables are determined recursively from the output layer towards the input layers, and
- the method of least squares, for determining the optimal set of consequent parameters.

Figure 5. Training of the neural network

In order to train the network, 50 input-output procurement data sets from the observed company were used, while model testing was conducted on the basis of 10 input-output data sets. A grid partition technique was applied to generate one model output, together with the hybrid optimization method. The output membership functions were assumed to be of the constant type. The number of training cycles (epochs) is 500. At the output of the neural network, there is an error of 2.15 (Figure 6).

Figure 6. Results of training of the ANFIS model EOQ

After the training phase, the ANFIS model EOQ was tested on the basis of 10 input-output data sets which were not used in the training of the model. The average error in testing the model is 4.03 (Figure 7).

Figure 7. Results of testing of the ANFIS model EOQ

Testing makes it possible to check the functioning of the model. The output data generated by the network are compared with known company data. The model is not expected to function without error, but deviations must be within the limits of the predicted tolerance. If there are large deviations, the network needs to be retrained, or it is sometimes necessary to exclude problematic data. The validity analysis of the model's results was carried out on the basis of the average relative error of the tested data (Figure 8). On the basis of the testing of 10 examples of EOQ determination, an average relative error of 3.28% was obtained. On the basis of this analysis, it can be said that the ANFIS model EOQ gives valid results.

Figure 8. Relative error of the ANFIS model EOQ in %

4.2. Sensitivity analysis

One of the basic requirements in modeling is to achieve a satisfactory sensitivity of the model: with small changes of the input variables, the output of the model must also change only slightly. The sensitivity analysis of the ANFIS model EOQ was carried out by changing the shape of the membership functions of the input variables, as well as the number of values of the input variables. Instead of the Gaussian curves applied in the basic model, triangular, trapezoidal and bell-shaped curves were tested (Table 2). In the analysis, the "prod" (product of array elements) method was used for the "and" operator and the "probor" (probabilistic OR) method for the "or" operator. Two cases were tested: one in which all input variables have three values, and another in which the first input variable, the size of demand, has five values, while the other two input variables, the level of inventory and the price, have three values (Table 3).
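The membership-function shapes compared in the sensitivity analysis can be written out explicitly; the following sketch defines the four families used (Gaussian, triangular, trapezoidal, generalized bell) with illustrative parameters, not those of the model itself.

```python
# Minimal definitions of the membership-function families compared in the
# sensitivity analysis (parameters are illustrative only).
import numpy as np

def gaussmf(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def trimf(x, a, b, c):
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0, 1)

def trapmf(x, a, b, c, d):
    return np.clip(np.minimum(np.minimum((x - a) / (b - a), 1), (d - x) / (d - c)), 0, 1)

def gbellmf(x, a, b, c):
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

x = np.linspace(0, 10, 5)
print(gaussmf(x, 5, 1.5), trimf(x, 2, 5, 8), trapmf(x, 1, 4, 6, 9), gbellmf(x, 2, 2, 5))
```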
Table 2. Sensitivity analysis by changing the form of the membership functions

Membership function | Triangular | Trapezoidal | Bell
EOQ (test 1) | 120 | 124 | 125
EOQ (test 2) | 42 | 32 | 24
EOQ (test 3) | 220 | 225 | 228
EOQ (test 4) | 60 | 57 | 59
EOQ (test 5) | 132 | 133 | 135

Table 3. Sensitivity analysis by changing the number of input values*

Membership function / number of variable values | Triangular 3-3-3 | Triangular 5-3-3 | Trapezoidal 3-3-3 | Trapezoidal 5-3-3 | Bell 3-3-3 | Bell 5-3-3
Training error | 4.16 | 2.21 | 8.40 | 2.82 | 3.55 | 1.77
Testing error | 7.02 | 6.58 | 8.56 | 6.99 | 6.36 | 2.83
* The number of epochs is set to 500.

For the defined cases of model sensitivity testing, the obtained results are the same or differ negligibly. This shows that the proposed ANFIS model EOQ gives valid results.

5. Conclusion

The applied concept of artificial intelligence is utilized for presenting, manipulating and implementing human knowledge in the efficient determination of the economic order quantity. Adaptive neuro-fuzzy inference systems have proven to be a valuable artificial intelligence concept for determining the EOQ, designed using the intuition and assessment of a logistics expert. The hybrid concept of artificial intelligence enabled the explanation of the system dynamics via a linguistic presentation of knowledge about a logistics process. It was used for modeling a complex logistics system in which it is difficult to determine the interdependence of the variables involved using other classical methods.

In the paper, the ANFIS model EOQ for solving a concrete problem in business practice was developed, following the tendency in contemporary scientific research. The model was tested and verified, and hence it can be practically applied. A sensitivity analysis was conducted, and it gave model results with negligible differences. The advantage of the proposed model is that, with some minor modification, it can be applied in any company dealing with the realization of flows of goods.

During the research it was observed that, in addition to its advantages, the applied hybrid concept of artificial intelligence also has certain flaws, and that none of the tools is universally applicable. The selection and adjustment of the membership functions of the variables is a very sensitive area that has a significant impact on the results of the model; therefore, the logical base of the fuzzy rules must be formed precisely and carefully. During the development of the model, neuro-fuzzy training usually requires a large amount of data and can take a long time, so the need for frequent repetition of the training can make the application impractical. A small number of input parameters gives rough and inaccurate results, so the survey sample must be representative.

In further research, current methods of multiple-criteria decision-making can be applied (Pamučar et al., 2018; Stević et al., 2017; Yazdani-Chamzini et al., 2017), and the flexibility of the proposed model can be used for determining the procurement quantity of other types of goods.

Acknowledgement: The paper is a part of the research done within the project TR 36012. The authors would like to thank the Ministry of Science and Technology of Serbia.

References

Abdel-Aleem, A., El-Sharief, A. M., Hassan, A. M., & El-Sebaie, G. M. (2017). Implementation of fuzzy and adaptive neuro-fuzzy inference systems in optimization of production inventory problem. Applied Mathematics & Information Sciences, 11, 289-298.
Abraham, A. (2001). Neuro fuzzy systems: state-of-the-art modeling techniques. Lecture Notes in Computer Science, Springer-Verlag, 269-276.

Ambukege, G., Justo, G., & Mushi, J. (2017). Neuro fuzzy modelling for prediction of consumer price index. International Journal of Artificial Intelligence and Applications, 8, 33-44.

Azizi, A., bin Ali, A.Y., & Ping, L.W. (2015). Modelling production uncertainties using the adaptive neuro-fuzzy inference system. South African Journal of Industrial Engineering, 26, 224-234.

Bowersox, D.J., Closs, D.J., & Cooper, M.B. (2010). Supply chain logistics management. New York: McGraw-Hill.

Cetisli, B. (2010). Development of an adaptive neuro-fuzzy classifier using linguistic hedges: Part 1. Expert Systems with Applications, 37, 6093-6101.

Chatfield, D.C., Hayya, J.C., & Cook, D.P. (2013). Stockout propagation and amplification in supply chain inventory systems. International Journal of Production Research, 51, 1491-1507.

Davis-Sramek, B., & Fugate, B. (2007). State of logistics: a visionary perspective. Journal of Business Logistics, 28, 1-34.

Efendigil, T. (2014). Modelling product returns in a closed-loop supply chain under uncertainties: a neuro fuzzy approach. Journal of Multiple-Valued Logic and Soft Computing, 23, 407-426.

Giannoccaro, I., Pontrandolfo, P., & Scozzi, B. (2003). A fuzzy echelon approach for inventory management in supply chains. European Journal of Operational Research, 149, 185-196.

Guneri, A. F., Ertay, T., & Yucel, A. (2011). An approach based on ANFIS input selection and modeling for supplier selection problem. Expert Systems with Applications, 38, 14907-14917.

Gupta, A., & Maranas, C. D. (2003). Managing demand uncertainty in supply chain planning. Computers and Chemical Engineering, 27, 1219-1227.

Griffis, S.E., Bell, J.E., & Closs, D.J. (2012). Metaheuristics in logistics and supply chain management. Journal of Business Logistics, 33(2), 90-106.

Harris, F. W. (1913). How many parts to make at once. Factory, The Magazine of Management, 10, 135-136.

Jang, J. S. R. (1993). ANFIS: adaptive-network-based fuzzy inference system. IEEE Transactions on Systems, Man and Cybernetics, 23, 665-685.

Keshavarz Ghorabaee, M., Zavadskas, E.K., Amiri, M., & Antucheviciene, J. (2016). A new method of assessment based on fuzzy ranking and aggregated weights (AFRAW) for MCDM problems under type-2 fuzzy environment. Economic Computation and Economic Cybernetics Studies and Research, 50, 39-68.

Lagodimos, A.G., Skouri, Christou, I.T., & Chountalas, P.T. (2018). The discrete-time EOQ model: solution and implications. European Journal of Operational Research, 266, 112-121.

Latif, H. H., Paul, S. K., & Azeem, A. (2014). Ordering policy in a supply chain with adaptive neuro-fuzzy inference system demand forecasting. International Journal of Management Science and Engineering Management, 9, 114-124.

Maddah, B., & Noueihed, N. (2017). EOQ holds under stochastic demand, a technical note. Applied Mathematical Modelling, 45, 205-208.

Mardani, A., Zavadskas, E.K., Khalifah, Z., Zakuan, N., Jusoh, A., Nor, K.M., & Khoshnoudi, M. (2017). A review of multi-criteria decision-making applications to solve energy management problems: two decades from 1995 to 2015. Renewable & Sustainable Energy Reviews, 71, 216-256.

Melis Teksan, Z., & Geunes, J. (2016). An EOQ model with price-dependent supply and demand. International Journal of Production Economics, 178, 22-33.
Negnevitsky, M. (2005). Artificial intelligence: a guide to intelligent systems. 2nd edition, Edinburgh Gate, England.

Ozkan, G., & Inal, M. (2014). Comparison of neural network application for fuzzy and ANFIS approaches for multi-criteria decision making problems. Applied Soft Computing, 24, 232-238.

Pamučar, D., & Ćirović, G. (2018). Vehicle route selection with an adaptive neuro fuzzy inference system in uncertainty conditions. Decision Making: Applications in Management and Engineering, 1, 13-37.

Pamučar, D., Petrović, I., & Ćirović, G. (2018). Modification of the Best-Worst and MABAC methods: a novel approach based on interval-valued fuzzy-rough numbers. Expert Systems with Applications, 91, 89-106.

Paul, S. K., Azeem, A., & Ghosh, A. K. (2015). Application of adaptive neuro-fuzzy inference system and artificial neural network in inventory level forecasting. International Journal of Business Information Systems, 18(3), 268-284.

Rajab, S., & Sharma, V. (2017). A review on the applications of neuro-fuzzy systems in business. Artificial Intelligence Review, Springer, 1-30.

Sremac, S. (2013). Goods flow management model for transport-storage processes. PhD dissertation, Faculty of Technical Sciences, University of Novi Sad.

Stević, Ž., Pamučar, D., Vasiljević, M., Stojić, G., & Korica, S. (2017). Novel integrated multi-criteria model for supplier selection: case study construction company. Symmetry, 9, 279.

Taleizadeh, A. A., Khanbaglo, M. P. S., & Cárdenas-Barrón, L. E. (2016). An EOQ inventory model with partial backordering and reparation of imperfect products. International Journal of Production Economics, 182, 418-434.

Tahmasebi, P., & Hezarkhani, A. (2010). Application of adaptive neuro-fuzzy inference system for grade estimation; case study, Sarcheshmeh porphyry copper deposit, Kerman, Iran. Australian Journal of Basic and Applied Sciences, 4, 408-420.

Teksan, Z. M., & Geunes, J. (2016). An EOQ model with price-dependent supply and demand. International Journal of Production Economics, 178, 22-33.

Vahdani, B., Iranmanesh, S. H., Meysam Mousavi, S., & Abdollahzade, M. (2012). A locally linear neuro-fuzzy model for supplier selection in cosmetics industry. Applied Mathematical Modelling, 36, 4714-4727.

Wallin, C., Rungtusanatham, M.J., & Rabinovich, E. (2006). What is the right inventory management approach for a purchased item. International Journal of Operations & Production Management, 26, 50-68.

Yazdani-Chamzini, A., Yakhchali, S.H., Volungevičienė, D., & Zavadskas, E.K. (2012). Forecasting gold price changes by using adaptive network fuzzy inference system. Journal of Business Economics and Management, 13, 994-1100.

Yazdani-Chamzini, A., Zavadskas, E.K., Antucheviciene, J., & Bausys, R. (2017). A model for shovel capital cost estimation, using a hybrid model of multivariate regression and neural networks. Symmetry, 9, 298-311.

Zavadskas, E.K., Govindan, K., Antucheviciene, J., & Turskis, Z. (2016). Hybrid multiple criteria decision-making methods: a review of applications for sustainability issues. Economic Research - Ekonomska Istraživanja, 29, 857-887.

© 2018 by the authors. Submitted for possible open access publication under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).


Decision Making: Applications in Management and Engineering
Vol. 1, Issue 2, 2018, pp. 121-130
ISSN: 2560-6018, eISSN: 2620-0104
DOI: https://doi.org/10.31181/dmame1802128l

A MULTICRITERIA MODEL FOR THE SELECTION OF THE TRANSPORT SERVICE PROVIDER: A SINGLE VALUED NEUTROSOPHIC DEMATEL MULTICRITERIA MODEL

Feng Liu 1*, Guan Aiwu 2, Vesko Lukovac 3, Milena Vukić 4

1 Business School, Zhejiang Wanli University, Ningbo, China
2 School of Management, Jiangsu University, Zhenjiang, China
3 University of Defence in Belgrade, Military Academy, Department of Logistics, Belgrade, Serbia
4 The College of Hotel Management, Belgrade, Serbia

* Corresponding author. E-mail addresses: feng.liu@gmail.com (F. Liu), gaiwuzh@126.com (G. Aiwu), lukovacvesko@yahoo.com (V. Lukovac), milena.vukic12@gmail.com (M. Vukić)

Received: 5 April 2018; Accepted: 2 September 2018; Available online: 2 September 2018.

Original scientific paper

Abstract: The decision-making process requires certain factors to be defined and considered in advance, especially in complex areas such as transport management in companies. One of the most important items in the initial phase of the transport process, which significantly influences its further course, is the decision on the choice of the most favorable transport provider. In this paper, a model for evaluating and selecting a transport service provider based on single valued neutrosophic numbers (SVNN) is presented. The neutrosophic set concept represents a general platform that extends the concepts of classical sets, fuzzy sets, intuitionistic fuzzy sets, and interval valued intuitionistic fuzzy sets. Using the SVNN concept, the DEMATEL method (Decision-Making Trial and Evaluation Laboratory) was modified and a model for ranking alternative solutions was proposed. The SVNN-DEMATEL model defines the mutual effects of the provider evaluation criteria, while, in the second phase of the model, alternative providers are evaluated and ranked. The SVNN-DEMATEL model was tested on a hypothetical example of the evaluation of five providers of transport services.

Key words: multicriteria decision-making, DEMATEL, single valued neutrosophic numbers, provider selection

1. Introduction

The outsourcing approach is widely present in all logistics aspects of business, especially in the transport domain, which is distinguished by its significant and direct share in overall logistics costs. After making a decision on accepting outsourcing for certain logistics activities of the organization, the management faces the issue of selecting the provider that will implement these activities for the organization's needs. The problem of selecting a transport service provider is conceptually similar to the choice of providers in most other logistics activities.
Regardless of differences in the views on structuring the provider selection problem (Ordoobadi & Wang, 2011; Shen Yu, 2012), as well as on the structure of the selection process itself (Snir & Hitt, 2004; Monczka et al., 2005; Cao & Wang, 2007), when it comes to the nature of this process its multidimensional character is often mentioned (Vinodh et al., 2011; Senthil et al., 2014). Accordingly, numerous multicriteria decision-making methods have been used to select providers. Various examples of combining approaches that treat uncertainty (fuzzy approaches, etc.) with traditional multicriteria techniques, such as TOPSIS (Zouggari & Benyoucef, 2011; Senthil et al., 2014), VIKOR (Sanayei et al., 2010), AHP (Singh & Sharma, 2011; Senthil et al., 2014), ANP (Nobar et al., 2011), etc., can be found in the literature. An example of the application of the DEMATEL method to the recognition of the relevant criteria, as well as to the identification of their significance and causal relationships, in the process of structuring a model for the selection of suppliers with carbon management competencies can be seen in Hsu et al. (2011).

As can be seen from the review of the referential literature given here, most approaches prefer the use of traditional multicriteria decision-making (MCDM) models in combination with fuzzy techniques (Senthil et al., 2014). However, in the real world the decision-maker may prefer to assess attributes using linguistic variables instead of crisp values, either because of partial knowledge about the attributes or the lack of information from the problem domain. The fuzzy set introduced by Zadeh (1965) is one of the tools used to present such imprecision in mathematical form. However, the fuzzy set can capture only the degree of membership of unclear parameters or events; it cannot represent the degree of non-membership or the degree of indeterminacy of uncertain parameters. In order to partially overcome these difficulties in defining imprecise parameters, Atanassov (1986) introduced intuitionistic fuzzy sets (IFS), which are characterized by a degree of membership and a degree of non-membership simultaneously, where the sum of the membership and non-membership degrees of an unclear parameter is less than or equal to one. In order to eliminate these shortcomings, Smarandache (1999) introduced the neutrosophic concept for dealing with indeterminate or inconsistent information that usually exists in reality. The concept of a neutrosophic set represents a general platform that extends the concepts of classical sets, fuzzy sets (Zadeh, 1965), intuitionistic fuzzy sets (Atanassov, 1986), and interval valued intuitionistic fuzzy sets (Atanassov & Gargov, 1989). Unlike intuitionistic fuzzy sets and interval valued intuitionistic fuzzy sets, in the neutrosophic set indeterminacy is characterized explicitly.

Using the advantages of the neutrosophic sets mentioned above, an original SVNN-DEMATEL model for the evaluation of transport service providers is proposed in this paper. In the next section (Section 2), the basics of SVNN are presented. Thereafter, in the third section of the paper, an original MCDM model based on SVNN is presented. Testing of the presented model is performed in the fourth section of the paper.
2. Neutrosophic sets

According to the definition of a neutrosophic set, a neutrosophic set $A$ in a universal set $X$ is characterized by a truth-membership function $T_A(x)$, an indeterminacy-membership function $I_A(x)$ and a falsity-membership function $F_A(x)$, where $T_A(x)$, $I_A(x)$ and $F_A(x)$ are real standard or non-standard subsets of $]^-0, 1^+[$, i.e. $T_A(x): X \to ]^-0, 1^+[$, $I_A(x): X \to ]^-0, 1^+[$ and $F_A(x): X \to ]^-0, 1^+[$. The set $I_A(x)$ can be used to represent not only indeterminacy, but also unclearness, uncertainty, inaccuracy, error, contradiction, the undefined, the unknown, incompleteness, redundancy, etc. (Biswas et al., 2016). In order to cover all unclear information, the indeterminacy-membership degree can be subdivided into sub-components such as "contradiction", "uncertainty" and "unknown" (Smarandache, 1999). The sum of the three membership functions $T_A(x)$, $I_A(x)$ and $F_A(x)$ should satisfy the condition $^-0 \le T_A(x) + I_A(x) + F_A(x) \le 3^+$ (Biswas et al., 2016).

The complement of a neutrosophic set $A$ is, for all $x \in X$, the set $A^c$ such that $T_{A^c}(x) = \{1^+\} \ominus T_A(x)$, $I_{A^c}(x) = \{1^+\} \ominus I_A(x)$ and $F_{A^c}(x) = \{1^+\} \ominus F_A(x)$. A neutrosophic set $A$ is contained in another neutrosophic set $B$ ($A \subseteq B$) if and only if, for each $x \in X$, $\inf T_A(x) \le \inf T_B(x)$, $\sup T_A(x) \le \sup T_B(x)$, $\inf I_A(x) \ge \inf I_B(x)$, $\sup I_A(x) \ge \sup I_B(x)$, $\inf F_A(x) \ge \inf F_B(x)$ and $\sup F_A(x) \ge \sup F_B(x)$.

Single valued neutrosophic sets (SVNS) are a special case of the neutrosophic set that can be used more successfully in modern scientific and engineering applications than the classical neutrosophic set. The basic arithmetic operations on SVNN that are significant for the mathematical background of the MCDM model are given in detail in Wang et al. (2010) and Deli & Şubaş (2017).

3. Single valued neutrosophic DEMATEL method

The DEMATEL method is a very suitable tool for designing and analyzing a structural model, which is achieved through the definition of cause-effect relationships between complex factors (Pamučar & Ćirović, 2015; Gigović et al., 2016). In order to comprehensively take into account the imprecision that exists in group decision-making, this paper modifies the DEMATEL method by using SVNS. The steps of the SVN-DEMATEL method are elaborated below.

Step 1: Expert analysis of the factors. Assuming that there are $m$ experts and $n$ factors (criteria) under observation, each expert should determine the degree of influence of factor $i$ on factor $j$. The comparative analysis of the pair of the $i$-th and $j$-th factors by the $e$-th expert is denoted by $d_{ij}^{e} = \langle t_{ij}^{e}, i_{ij}^{e}, f_{ij}^{e} \rangle$, $(i = 1, \dots, n;\ j = 1, \dots, n)$, a neutrosophic number representing the comparison of the pair of factors. The value of each pair $d_{ij}^{e}$ is taken from a previously defined single valued neutrosophic linguistic scale. The response of the $e$-th expert is displayed by the single valued neutrosophic matrix $D^{e} = [d_{ij}^{e}]_{n \times n} = [\langle t_{ij}^{e}, i_{ij}^{e}, f_{ij}^{e} \rangle]_{n \times n}$, $(1 \le e \le m)$, where $m$ represents the total number of experts:
3. Single valued neutrosophic DEMATEL method

The DEMATEL method is a very suitable tool for designing and analyzing a structural model, which is achieved through the definition of cause-and-effect relationships between complex factors (Pamučar & Ćirović, 2015; Gigović et al., 2016). In order to comprehensively take into account the imprecision that exists in group decision-making, this paper modifies the DEMATEL method by using SVNS. The steps of the SVN-DEMATEL method are elaborated below.

Step 1: Expert analysis of the factors. Assuming that there are $m$ experts and $n$ factors (criteria) under observation, each expert should determine the degree of influence of factor $i$ on factor $j$. The comparison of the pair of the $i$-th and $j$-th factors by the $e$-th expert is denoted by $d_{ij}^{e} = \langle t_{ij}^{e}, i_{ij}^{e}, f_{ij}^{e} \rangle$, $i = 1, \ldots, n$, $j = 1, \ldots, n$, and is a single valued neutrosophic number taken from a previously defined single valued neutrosophic linguistic scale. The response of the $e$-th expert is represented by the single valued neutrosophic matrix $D^{e} = [d_{ij}^{e}]_{n\times n} = [\langle t_{ij}^{e}, i_{ij}^{e}, f_{ij}^{e}\rangle]_{n\times n}$, $1 \le e \le m$, where $m$ is the total number of experts:

$$
D^{e} = \begin{bmatrix}
0 & \langle t_{12}^{e}, i_{12}^{e}, f_{12}^{e}\rangle & \cdots & \langle t_{1n}^{e}, i_{1n}^{e}, f_{1n}^{e}\rangle \\
\langle t_{21}^{e}, i_{21}^{e}, f_{21}^{e}\rangle & 0 & \cdots & \langle t_{2n}^{e}, i_{2n}^{e}, f_{2n}^{e}\rangle \\
\vdots & \vdots & \ddots & \vdots \\
\langle t_{n1}^{e}, i_{n1}^{e}, f_{n1}^{e}\rangle & \langle t_{n2}^{e}, i_{n2}^{e}, f_{n2}^{e}\rangle & \cdots & 0
\end{bmatrix} \quad (1)
$$

where $\langle t_{ij}^{e}, i_{ij}^{e}, f_{ij}^{e}\rangle$ are single valued neutrosophic linguistic expressions from the predefined linguistic scale which expert $e$ uses to represent his comparison of the pair of criteria. In this way we obtain the matrices $D^{1}, D^{2}, \ldots, D^{m}$, which represent the response matrices of each of the $m$ experts.

Step 2: Determination of the weight coefficients of the experts. It is assumed that $m$ experts $\{e_1, e_2, \ldots, e_m\}$ with assigned weight coefficients $\{\delta_1, \delta_2, \ldots, \delta_m\}$, $0 \le \delta_e \le 1$ ($e = 1, 2, \ldots, m$), participate in the decision-making process. It is further assumed that: (1) each expert from the group of $m$ has his own weight coefficient, (2) the weight coefficients of the experts may differ in value, and (3) the condition $\sum_{e=1}^{m}\delta_e = 1$ is satisfied. The significance of each expert is expressed using linguistic variables from a predefined single valued neutrosophic linguistic scale. If the significance of the $e$-th expert is evaluated by the single valued neutrosophic number $e_e = \langle t_e(x), i_e(x), f_e(x)\rangle$, then the weight coefficient of the $e$-th expert is determined using expression (2), [17]:

$$
\delta_e = \frac{1 - \sqrt{\dfrac{\big(1 - t_e(x)\big)^{2} + \big(i_e(x)\big)^{2} + \big(f_e(x)\big)^{2}}{3}}}{\displaystyle\sum_{e=1}^{m}\left(1 - \sqrt{\dfrac{\big(1 - t_e(x)\big)^{2} + \big(i_e(x)\big)^{2} + \big(f_e(x)\big)^{2}}{3}}\right)} \quad (2)
$$

where $\sum_{e=1}^{m}\delta_e = 1$, $1 \le e \le m$.

Step 3: Determination of the matrix of the average responses of the experts. On the basis of the individual response matrices of the $m$ experts, we obtain the matrix of aggregated expert sequences $D^{*} = [\tilde{d}_{ij}]_{n\times n}$, where $\tilde{d}_{ij} = \big(\langle t_{ij}^{1}, i_{ij}^{1}, f_{ij}^{1}\rangle, \langle t_{ij}^{2}, i_{ij}^{2}, f_{ij}^{2}\rangle, \ldots, \langle t_{ij}^{m}, i_{ij}^{m}, f_{ij}^{m}\rangle\big)$ is the sequence describing the relative importance of criterion $i$ in relation to criterion $j$. Using expression (3), the values at each position of matrix $D^{*}$ are aggregated:

$$
d_{ij} = \bigoplus_{e=1}^{m}\delta_e\, d_{ij}^{e} = \left\langle 1 - \prod_{e=1}^{m}\big(1 - t_{ij}^{e}\big)^{\delta_e},\; \prod_{e=1}^{m}\big(i_{ij}^{e}\big)^{\delta_e},\; \prod_{e=1}^{m}\big(f_{ij}^{e}\big)^{\delta_e}\right\rangle \quad (3)
$$

where $d_{ij} = \langle t_{ij}, i_{ij}, f_{ij}\rangle$ is the aggregated SVNN. In this way we obtain the aggregated single valued neutrosophic matrix of the average responses of the experts (4):

$$
D = \begin{bmatrix}
0 & \langle t_{12}, i_{12}, f_{12}\rangle & \cdots & \langle t_{1n}, i_{1n}, f_{1n}\rangle \\
\langle t_{21}, i_{21}, f_{21}\rangle & 0 & \cdots & \langle t_{2n}, i_{2n}, f_{2n}\rangle \\
\vdots & \vdots & \ddots & \vdots \\
\langle t_{n1}, i_{n1}, f_{n1}\rangle & \langle t_{n2}, i_{n2}, f_{n2}\rangle & \cdots & 0
\end{bmatrix} \quad (4)
$$

Matrix $D$ shows the initial effects that a factor causes, as well as the initial effects that it receives from the other factors. The sum of the $i$-th row of matrix $D$ represents the total direct effects that factor $i$ transmits to the other factors, while the sum of the $j$-th column represents the total direct effects that factor $j$ receives from the other factors.
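As an illustration of expressions (2) and (3), the sketch below (our own, hedged reading of the formulas, reusing the SVNN class from the earlier sketch) computes the crisp expert weights from the linguistic ratings of the experts' significance and aggregates the pairwise comparison matrices of several experts into the matrix of average responses.

```python
import numpy as np

def expert_weights(ratings):
    """Expression (2): crisp weights from the SVNNs rating each expert's significance."""
    scores = [1 - np.sqrt(((1 - r.t) ** 2 + r.i ** 2 + r.f ** 2) / 3) for r in ratings]
    total = sum(scores)
    return [s / total for s in scores]

def svnwa(values, weights):
    """Expression (3): single valued neutrosophic weighted average of SVNNs."""
    t = 1 - np.prod([(1 - v.t) ** w for v, w in zip(values, weights)])
    i = np.prod([v.i ** w for v, w in zip(values, weights)])
    f = np.prod([v.f ** w for v, w in zip(values, weights)])
    return SVNN(t, i, f)

def average_response_matrix(expert_matrices, weights):
    """Aggregate the n x n comparison matrices D^1..D^m into the matrix D of (4)."""
    n = len(expert_matrices[0])
    return [[svnwa([d[r][c] for d in expert_matrices], weights)
             for c in range(n)] for r in range(n)]
```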
Step 4: Determination of the SVN total relation matrix. Since each single valued neutrosophic number consists of the three components $t(t_{ij})$, $i(t_{ij})$ and $f(t_{ij})$, the SVN matrix $D$ can be divided into three sub-matrices, $D = \langle T, I, F\rangle$, where $T = [t_{ij}]_{n\times n}$, $I = [i_{ij}]_{n\times n}$ and $F = [f_{ij}]_{n\times n}$. Furthermore, $\lim_{m\to\infty}T^{m} = O$, $\lim_{m\to\infty}I^{m} = O$ and $\lim_{m\to\infty}F^{m} = O$, where $O$ denotes the zero matrix. Based on these settings, the elements of the SVN matrix of total effects are obtained as

$$
\big[t(t_{ij})\big]_{n\times n} = \lim_{m\to\infty}\big(T + T^{2} + \cdots + T^{m}\big) = T(\mathbf{I} - T)^{-1},\;
\big[i(t_{ij})\big]_{n\times n} = I(\mathbf{I} - I)^{-1},\;
\big[f(t_{ij})\big]_{n\times n} = F(\mathbf{I} - F)^{-1} \quad (5)
$$

where $\mathbf{I}$ denotes the $n\times n$ identity matrix. The sub-matrices $t(T)$, $i(T)$ and $f(T)$ together represent the SVN matrix of total impacts $T = \langle t(T), i(T), f(T)\rangle$:

$$
T = \begin{bmatrix}
\langle t(t_{11}), i(t_{11}), f(t_{11})\rangle & \langle t(t_{12}), i(t_{12}), f(t_{12})\rangle & \cdots & \langle t(t_{1n}), i(t_{1n}), f(t_{1n})\rangle \\
\langle t(t_{21}), i(t_{21}), f(t_{21})\rangle & \langle t(t_{22}), i(t_{22}), f(t_{22})\rangle & \cdots & \langle t(t_{2n}), i(t_{2n}), f(t_{2n})\rangle \\
\vdots & \vdots & \ddots & \vdots \\
\langle t(t_{n1}), i(t_{n1}), f(t_{n1})\rangle & \langle t(t_{n2}), i(t_{n2}), f(t_{n2})\rangle & \cdots & \langle t(t_{nn}), i(t_{nn}), f(t_{nn})\rangle
\end{bmatrix} \quad (6)
$$

where $t_{ij} = \langle t(t_{ij}), i(t_{ij}), f(t_{ij})\rangle$ is a single valued neutrosophic number which expresses the total (direct and indirect) effect of factor $i$ on factor $j$; matrix $T$ thus reflects the interdependence of each pair of factors.

Step 5: Calculation of the sums of the rows and columns of the total impact matrix $T$. In the total impact matrix $T$, the sums of the rows and of the columns are represented by the vectors $R$ and $C$ of dimension $n\times 1$:

$$
R_i = \left[\sum_{j=1}^{n} t_{ij}\right]_{n\times 1} = \left[\sum_{j=1}^{n}\langle t(t_{ij}), i(t_{ij}), f(t_{ij})\rangle\right]_{n\times 1} \quad (7)
$$

$$
C_j = \left[\sum_{i=1}^{n} t_{ij}\right]_{n\times 1} = \left[\sum_{i=1}^{n}\langle t(t_{ij}), i(t_{ij}), f(t_{ij})\rangle\right]_{n\times 1} \quad (8)
$$

Step 6: Determination of the weight coefficients of the criteria. The weight coefficients of the criteria are determined using the expression

$$
w_j = \frac{\sqrt{\left[1-\sqrt{\tfrac{(1-t(R_j))^{2}+(i(R_j))^{2}+(f(R_j))^{2}}{3}}\right]^{2}+\left[1-\sqrt{\tfrac{(1-t(C_j))^{2}+(i(C_j))^{2}+(f(C_j))^{2}}{3}}\right]^{2}}}{\displaystyle\sum_{j=1}^{n}\sqrt{\left[1-\sqrt{\tfrac{(1-t(R_j))^{2}+(i(R_j))^{2}+(f(R_j))^{2}}{3}}\right]^{2}+\left[1-\sqrt{\tfrac{(1-t(C_j))^{2}+(i(C_j))^{2}+(f(C_j))^{2}}{3}}\right]^{2}}} \quad (9)
$$
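The total-relation step can be sketched as follows. This is our reading of expressions (5)-(8), not the authors' code: the component sub-matrices T, I and F are treated as ordinary real matrices, and the row and column aggregation uses the usual SVNN addition. It builds on the SVNN class from the earlier sketch.

```python
import numpy as np
from functools import reduce

def total_relation(component: np.ndarray) -> np.ndarray:
    """Expression (5) applied to one component sub-matrix X: X + X^2 + ... = X (I - X)^-1.
    The series converges when the spectral radius of X is below 1."""
    n = component.shape[0]
    return component @ np.linalg.inv(np.eye(n) - component)

def svnn_add(a, b):
    """Addition of two SVNNs <t, i, f>, used when summing rows and columns in (7)-(8)."""
    return SVNN(a.t + b.t - a.t * b.t, a.i * b.i, a.f * b.f)

def row_col_sums(t_mat, i_mat, f_mat):
    """Vectors R and C of (7)-(8), built from the three total-relation components."""
    n = t_mat.shape[0]
    cells = [[SVNN(t_mat[r, c], i_mat[r, c], f_mat[r, c]) for c in range(n)] for r in range(n)]
    R = [reduce(svnn_add, cells[r]) for r in range(n)]
    C = [reduce(svnn_add, [cells[r][c] for r in range(n)]) for c in range(n)]
    return R, C
```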
Step 7: Forming the initial decision matrix (N). As in the DEMATEL part of the model, the evaluation of the alternatives against the criteria is done by the $m$ experts $\{e_1, e_2, \ldots, e_m\}$ with the assigned weight coefficients $\{\delta_1, \delta_2, \ldots, \delta_m\}$, $\sum_{e=1}^{m}\delta_e = 1$. In order to obtain the final ranking of the alternatives $a_i \in A$ ($i = 1, 2, \ldots, b$), each expert $e_e$ ($e = 1, 2, \ldots, m$) evaluates the alternatives with respect to the defined set of criteria $C = \{c_1, c_2, \ldots, c_n\}$. In this way, a corresponding initial decision matrix $N^{(e)} = [\xi_{ij}^{(e)}]_{b\times n}$ is constructed for each expert, where the elements $\xi_{ij}^{(e)}$ are SVN numbers from a predefined neutrosophic linguistic scale. The final aggregated decision matrix $N = [\xi_{ij}]_{b\times n}$ is obtained by aggregating the elements $\xi_{ij}^{(e)} = \langle t_{ij}^{(e)}, i_{ij}^{(e)}, f_{ij}^{(e)}\rangle$ of the matrices $N^{(e)}$, where the elements $\xi_{ij} = \langle t_{ij}, i_{ij}, f_{ij}\rangle$ are obtained by applying the SVNWAA operator, expression (10):

$$
\xi_{ij} = \mathrm{SVNWAA}\big(\xi_{ij}^{(1)}, \xi_{ij}^{(2)}, \ldots, \xi_{ij}^{(m)}\big) = \sum_{e=1}^{m}\delta_e\,\xi_{ij}^{(e)} = \left\langle 1 - \prod_{e=1}^{m}\big(1 - t_{ij}^{(e)}\big)^{\delta_e},\; \prod_{e=1}^{m}\big(i_{ij}^{(e)}\big)^{\delta_e},\; \prod_{e=1}^{m}\big(f_{ij}^{(e)}\big)^{\delta_e}\right\rangle \quad (10)
$$

where $\delta_e$ is the weight coefficient of expert $e$, $0 \le \delta_e \le 1$, $\sum_{e=1}^{m}\delta_e = 1$.

Step 8: Calculation of the elements of the weighted matrix (D). The elements of the weighted matrix $D = [d_{ij}]_{b\times n} = [\langle t_{dij}, i_{dij}, f_{dij}\rangle]_{b\times n}$ are obtained by applying expression (11):

$$
d_{ij} = \langle t_{dij}, i_{dij}, f_{dij}\rangle = w_j\,\xi_{ij} = \left\langle 1 - \big(1 - t_{ij}\big)^{w_j},\; \big(i_{ij}\big)^{w_j},\; \big(f_{ij}\big)^{w_j}\right\rangle \quad (11)
$$

Step 9: Ranking the alternatives. The ranking of the alternatives is carried out on the basis of the values of the criterion functions $Q_i$ ($i = 1, 2, \ldots, b$), obtained by applying expression (12):

$$
Q_i = \sum_{j=1}^{n} d_{ij}, \quad i = 1, 2, \ldots, b; \; j = 1, 2, \ldots, n. \quad (12)
$$

4. Numerical example

The SVNN-DEMATEL multi-criteria model for provider selection was tested on a hypothetical example of the selection of five providers of transport services. As a result of applying the model, the weight coefficients of the evaluation criteria were determined and the ranking of the transport providers was performed. Four experts in the field of transport participated in the testing of the model; their weight coefficients, obtained using expression (2), are $\delta_1 = 0.2864$, $\delta_2 = 0.2741$, $\delta_3 = 0.2170$ and $\delta_4 = 0.1673$. The experts evaluated the criteria using the following linguistic scale: very important, VI (0.90, 0.10, 0.10); important, I (0.75, 0.25, 0.20); medium, M (0.50, 0.50, 0.50); unimportant, UI (0.35, 0.75, 0.80); very unimportant, VU (0.10, 0.90, 0.90). Five criteria were used to evaluate the providers: c1, reliability; c2, business excellence; c3, total cost; c4, customer service; c5, green image. The expert evaluations of the criteria are shown in Table 1.

Table 1. Expert analysis of the criteria

Criteria  c1            c2           c3           c4           c5
c1        0             VI;VI;VI;I   I;M;M;I      VI;VI;I;VI   I;I;M;UI
c2        I;M;M;I       0            M;M;VI;VI    M;M;M;M      VI;I;I;VI
c3        M;M;M;M       M;M;I;I      0            M;I;M;M      VI;VI;VI;VI
c4        I;I;I;VI      M;M;M;M      M;M;M;M      0            M;M;M;M
c5        M;VU;VU;UI    I;I;I;I      M;M;M;M      I;M;M;I      0

By summing the elements of the total relation matrix (6) by rows, equation (7), and by columns, equation (8), the values of the total direct and indirect effects of criterion j on the other criteria and of the other criteria on criterion j are obtained. These values, together with the threshold value (α) of the total relation matrix, are used to define the cause-and-effect relationship diagram. The cause and effect relationship (CER) diagram (Fig. 1) is formed to visualize the complicated causal relationships among the criteria in a visible structural model.

Figure 1. CER diagram (the criteria are plotted with $r_i + c_i$ on the horizontal axis and $r_i - c_i$ on the vertical axis)

The elements of matrix T with a value higher than the threshold value α are identified and mapped on the diagram (Fig. 1), where the x-axis denotes ($r_i + c_i$) and the y-axis denotes ($r_i - c_i$). These values are used to demonstrate the relationship between two factors: the arrow denoting the cause-effect relationship is directed from the element with a value lower than α towards the element characterized by a value higher than α.
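For completeness, the evaluation and ranking part of the model (steps 7-9) can be sketched as follows. This is again our own illustration under the reading of expressions (11) and (12) given above; it reuses the SVNN class, the svnn_add helper and the score function from the earlier sketches, and it assumes crisp criterion weights in (0, 1].

```python
from functools import reduce

def weight_rating(x, w):
    """Expression (11): d_ij = w_j * xi_ij for a crisp weight w_j in (0, 1]."""
    return SVNN(1 - (1 - x.t) ** w, x.i ** w, x.f ** w)

def rank_alternatives(decision_matrix, weights):
    """Expression (12): Q_i as the SVNN sum of the weighted row, ranked by crisp score."""
    q = []
    for row in decision_matrix:
        weighted = [weight_rating(x, w) for x, w in zip(row, weights)]
        q.append(reduce(svnn_add, weighted))
    order = sorted(range(len(q)), key=lambda i: q[i].score(), reverse=True)
    return q, order
```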
Using expression (9), we obtain the weight coefficients of the criteria: c1 (0.828, 0.156, 0.145), c2 (0.606, 0.381, 0.364), c3 (0.873, 0.129, 0.147), c4 (0.641, 0.372, 0.329) and c5 (0.709, 0.307, 0.318). The expert evaluation of the providers by the criteria (Table 2) was carried out using the following linguistic scale: extremely good/high, EG/EH (1, 0, 0); very very good/high, VVG/VVH (0.9, 0.1, 0.1); very good/high, VG/VH (0.8, 0.15, 0.2); good/high, G/H (0.7, 0.25, 0.3); medium good/high, MG/MH (0.6, 0.35, 0.4); medium/fair, M/F (0.5, 0.5, 0.5); medium bad/low, MB/ML (0.4, 0.65, 0.6); bad/low, B/L (0.3, 0.75, 0.7); very bad/low, VB/VL (0.2, 0.85, 0.8).

Table 2. Expert evaluation of the providers according to the evaluation criteria

Alternative  c1            c2            c3            c4             c5
a1           VG;MG;VG;G    G;G;MG;G      MG;MG;M;M     G;M;MG;M       M;MH;VH;M
a2           G;VG;MG;MG    VG;MG;M;MG    VG;G;VG;VG    VG;VG;M;G      VH;M;H;H
a3           M;G;MG;M      M;VG;G;G      M;G;MG;MG     MG;MG;MG;MG    H;H;M;MH
a4           G;MG;G;MG     MG;M;VG;M     G;MG;G;MG     M;MB;MG;VG     M;M;MH;H
a5           G;G;MG;VG     G;G;MG;VG     MG;G;VG;G     MG;G;VG;G      H;VH;VH;VH

Applying expressions (10)-(12), we obtain the final ranking of the providers: a1 (0.622, 0.330, 0.374) > a2 (0.571, 0.384, 0.425) > a3 (0.504, 0.457, 0.497) > a4 (0.499, 0.457, 0.497) > a5 (0.344, 0.643, 0.637). The ranking of the providers was based on the values of the score function s(a_i) [15].

5. Conclusion

In this paper, a new SVNN-DEMATEL multi-criteria model for the selection of a transport service provider is presented. The model uses a new approach, based on neutrosophic numbers, for dealing with uncertainty. Since an unambiguous and precise determination of the relative importance of the criteria is not required, the model uses neutrosophic linguistic expressions in the evaluation process. The areas of possible application of the model are therefore numerous: from logistical problems, problems of industrial management, environmental management, education and health to various other fields of expertise. The model is also open to upgrading and expansion by implementing the results of various techniques of group or expert reasoning.

References

Atanassov, K. T., & Gargov, G. (1989). Interval valued intuitionistic fuzzy sets. Fuzzy Sets and Systems, 31(3), 343-349.

Atanassov, K. T. (1986). Intuitionistic fuzzy sets. Fuzzy Sets and Systems, 20(1), 87-96.

Biswas, P., Pramanik, S., & Giri, C. B. (2016). TOPSIS method for multi-attribute group decision-making under single-valued neutrosophic environment. Neural Computing and Applications, 27, 727-737.

Cao, Q., & Wang, Q. (2007). Optimizing vendor selection in a two-stage outsourcing process. Computers & Operations Research, 34, 3757-3768.

Deli, I., & Şubaş, Y. (2017). A ranking method of single valued neutrosophic numbers and its applications to multi-attribute decision-making problems. International Journal of Machine Learning and Cybernetics, 8, 1309-1322. DOI: 10.1007/s13042-016-0505-3.

Gigović, Lj., Pamučar, D., Bajić, Z., & Milićević, M. (2016). The combination of expert judgment and GIS-MAIRCA analysis for the selection of sites for ammunition depot. Sustainability, 8(4), 1-30.

Hsu, C.-W., Kuo, T.-C., Chen, S.-H., & Hu, A. H. (2011). Using DEMATEL to develop a carbon management model of supplier selection in green supply chain management. Journal of Cleaner Production.

Monczka, R. M., Trent, R. J., & Handfield, R. B. (2005). Purchasing and supply chain management, 3rd edition. South-Western, Cengage Learning.
Nobar, M. N., Setak, M., & Tafti, A. F. (2011). Selecting suppliers considering features of 2nd layer suppliers by utilizing FANP procedure. International Journal of Business and Management, 6(2), 265-275.

Ordoobadi, S. M., & Wang, S. (2011). A multiple perspectives approach to supplier selection. Industrial Management and Data Systems, 111(4), 629-648.

Pamučar, D., & Ćirović, G. (2015). The selection of transport and handling resources in logistics centres using multi-attributive border approximation area comparison (MABAC). Expert Systems with Applications, 42, 3016-3028.

Sanayei, A., Mousavi, S. F., & Yazdankhah, A. (2010). Group decision-making process for supplier selection with VIKOR under fuzzy environment. Expert Systems with Applications, 37, 24-30.

Senthil, S., Srirangacharyulu, B., & Ramesh, A. (2014). A robust hybrid multi-criteria decision-making methodology for contractor evaluation and selection in third-party reverse logistics. Expert Systems with Applications, 41, 50-58.

Shen, C., & Yu, K. (2012). An integrated fuzzy strategic supplier selection approach for considering the supplier integration spectrum. International Journal of Production Research, 50(3), 817-829.

Singh, R., & Sharma, S. K. (2011). Supplier selection: Fuzzy-AHP approach. International Journal of Engineering Science and Technology, 3(10), 7426-7431.

Smarandache, F. (1999). A unifying field in logics. Neutrosophy: Neutrosophic probability, set and logic. American Research Press, Rehoboth.

Snir, E. M., & Hitt, L. M. (2004). Vendor screening in information technology contracting with a pilot project. Journal of Organizational Computing and Electronic Commerce, 14(1), 61-88.

Vinodh, S., Anesh Ramiya, R., & Gautham, S. G. (2011). Application of fuzzy analytic network process for supplier selection in a manufacturing organisation. Expert Systems with Applications, 38, 272-280.

Wang, H., Smarandache, F., Zhang, Y. Q., & Sunderraman, R. (2010). Single valued neutrosophic sets. Multispace and Multistructure, 4, 410-413.

Zadeh, L. A. (1965). Fuzzy sets. Information and Control, 8(3), 338-353.

Zouggari, A., & Benyoucef, L. (2011). Simulation based fuzzy TOPSIS approach for group multi-criteria supplier selection problem. Engineering Applications of Artificial Intelligence, DOI: 10.1016/j.engappai.2011.10.012.

© 2018 by the authors. Submitted for possible open access publication under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

Decision Making: Applications in Management and Engineering
Vol. 3, Issue 2, 2020, pp. 149-161.
ISSN: 2560-6018, eISSN: 2620-0104
DOI: https://doi.org/10.31181/dmame2003149z

Objective methods for determining criteria weight coefficients: A modification of the CRITIC method

Mališa Žižović1*, Boža Miljković2 and Dragan Marinković3

1 University of Kragujevac, Faculty of Technical Sciences in Čačak, Čačak, Serbia
2 University of Novi Sad, Faculty of Education Sombor, Novi Sad, Serbia
3 Technische Universität Berlin, Faculty of Mechanical and Transport Systems, Berlin, Germany

* Corresponding author. E-mail addresses: zizovic@gmail.com (M. Žižović), bole@ravangrad.net (B. Miljković), dragan.marinkovic@tu-berlin.de (D. Marinković)

Received: 25 July 2020; accepted: 30 September 2020; available online: 10 October 2020.
Original scientific paper

Abstract: Determining criteria weight coefficients is a crucial step in multi-criteria decision-making models, and this problem has therefore received considerable attention in the literature. This paper presents a new approach to modifying the Criteria Importance Through Intercriteria Correlation (CRITIC) method, which belongs to the objective methods for determining criteria weight coefficients. Modifying the CRITIC method (CRITIC-M) entails changing the normalization of the elements of the initial decision matrix and changing the aggregation of the data from the normalized decision matrix. By introducing a new normalization process, we achieve smaller deviations between the normalized elements, which in turn yields lower values of the standard deviation. Thus, the relationships between the data in the initial decision matrix are represented in a more objective way. By introducing a new process for aggregating the weight coefficient values, the CRITIC-M method enables a more comprehensive understanding of the data in the initial decision matrix, leading to more objective values of the weight coefficients. The presented CRITIC-M method is tested on two examples, followed by a discussion of the results through a comparison with the classic CRITIC method.

Key words: CRITIC; criteria weights; multi-criteria decision making.

1. Introduction

Determining criteria weights is one of the key problems of multi-criteria analysis models. Methodologies for determining criteria weights have been the topic of intensive research and scientific discussion for many years. Generally, most approaches to determining criteria weights can be divided into subjective and objective. Subjective approaches are based on determining criteria weights using information obtained from decision makers or experts included in the decision process; they reflect the subjective opinion and intuition of the decision makers, which means that the decision makers influence the decision-making process. Contrary to subjective approaches, objective approaches are based on determining criteria weights using the data present in the initial decision matrix and disregard the opinion of the decision makers.

With the subjective approach, the decision maker or expert gives an opinion on the significance of the criteria for a given process in accordance with their preferences. There are multiple ways of determining criteria weights using a subjective approach, and they differ in the number of participants in the process of determining the weights, the applied methods and the way the final criteria weights are formed. Subjective models used for aggregating partial values include: the trade-off method (Keeney and Raiffa, 1976); the SWING method (von Winterfeldt and Edwards, 1986); the SMART method (the Simple Multi-Attribute Rating Technique) (Edwards and Barron, 1994); and the new version of the SMART method, SMARTER (SMART Exploiting Ranks), developed by Edwards and Barron (1994), which uses the centroid method for determining the criteria weight coefficients. Apart from the listed subjective approaches for determining criteria weights, there are also approaches based exclusively on pairwise comparisons, called pairwise comparison methods. The pairwise comparison method was developed by Thurstone (1927) and requires that the comparisons be made by one expert or a team of experts.
The pairwise comparison method is used for presenting the relative significance of m alternatives in situations where it is not possible or meaningful to grade the alternatives against criteria. In pairwise methods, one expert or a team of experts compares an alternative to the other alternatives from a set, in relation to a considered criterion. One of the best known methods for determining criteria weights using pairwise comparison is the Analytical Hierarchy Process (AHP) method (Saaty, 1980). The AHP method is based on the mutual comparison of criteria significance using Saaty's nine-level scale. Apart from AHP, other pairwise comparison methods include: the Decision-Making Trial and Evaluation Laboratory (DEMATEL) method (Gabus and Fontela, 1972); the Step-Wise Weight Assessment Ratio Analysis (SWARA) method (Keršuliene et al., 2010); the Best Worst Method (BWM) (Rezaei, 2015); the Full Consistency Method (FUCOM) (Pamucar et al., 2018); Level Based Weight Assessment (LBWA) (Žižović and Pamučar, 2019); Non-Decreasing Series at criteria significance Levels (NDSL) (Žižović et al., 2020); and the resistance-to-change method (Roberts and Goodwin, 2002), which contains elements of the SWING and pairwise comparison methods.

Contrary to subjective methods, objective approaches in a way eliminate the decision maker, i.e. the criteria weights are determined on the basis of the criterion values of the alternatives. The emphasis is on the analysis of the decision matrix: the values of the alternatives are considered in relation to the set of criteria, and from this, data about the values of the criteria weights are obtained. The decision matrix allows cross-referencing alternatives and criteria on the basis of the qualitative and quantitative values of each alternative in relation to each criterion. The best known models include: the entropy method (Shannon and Weaver, 1947), the CRITIC method (Criteria Importance Through Intercriteria Correlation) (Diakoulaki et al., 1995), the FANMA method, named after its authors (Fan, 1996; Ma et al., 1999), and Data Envelopment Analysis (DEA) (Charnes et al., 1978).

The entropy method determines objective criteria weights on the basis of Shannon's concept of the entropy of the data in the decision matrix (Shannon and Weaver, 1947). The method focuses on measuring the indefiniteness of the data in the decision matrix and generates the set of weight coefficients from the mutual contrast of the individual criterion values of the alternatives for each criterion, and then for all criteria. Determining criteria weights using the FANMA method is based on the principle of distance from the ideal point and so-called early weight normalization (Srdjevic et al., 2003). Objective determination of criteria weights using the DEA method (Charnes et al., 1978) is based on solving linear optimization models for the alternatives and measuring the efficiency of each alternative in relation to the defined criteria. The criteria are categorized as input and output criteria, after which a number of linear models equal to the number of options is solved. DEA objectively ranks the options, which is the end goal of a multi-criteria analysis, and produces groups of criteria weight values for all options as a step towards reaching that goal. The CRITIC method is among the best known and most widely used objective methods.
The CRITIC method is a correlation method which uses the standard deviation of the criterion values of the alternatives in each column, as well as the correlation coefficients of all pairs of columns, to determine the criteria contrasts. This paper identifies certain limitations in the application of the classic CRITIC method and suggests a modification of the CRITIC method (CRITIC-M) that entails: 1) changing the normalization process of the initial matrix elements and 2) changing the function for aggregating the data that represents the values of the weight coefficients. The presented modifications of the CRITIC method are aimed at reaching more objective values of the weight coefficients.

The remainder of the paper is organized as follows. Section 2 presents the mathematical basis of the classic CRITIC method, while Section 3 presents the motivation for developing the CRITIC-M method and the steps of the developed methodology. In the fourth section of the paper, we present the application of the CRITIC-M method on two examples and compare the results with the classic CRITIC method. Final observations and directions of future research are presented in Section 5.

2. The CRITIC method

The CRITIC method (Criteria Importance Through Intercriteria Correlation) (Diakoulaki et al., 1995) is a correlation method. Standard deviations of the criterion values of the alternatives in the columns, as well as correlation coefficients of all pairs of columns, are used to determine the criteria contrasts.

Step 1: Starting from an initial decision matrix $X = [x_{ij}]_{m\times n}$, we normalize the elements of the initial decision matrix and form the normalized matrix $\hat{X} = [\xi_{ij}]_{m\times n}$:

$$
X = \begin{bmatrix}
x_{11} & x_{12} & \cdots & x_{1n} \\
x_{21} & x_{22} & \cdots & x_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
x_{m1} & x_{m2} & \cdots & x_{mn}
\end{bmatrix} \quad (1)
$$

where the rows correspond to the alternatives $a_1, a_2, \ldots, a_m$ and the columns to the criteria $c_1, c_2, \ldots, c_n$. The normalization of the elements of matrix $X$ is done by applying (2) and (3):

a) for maximizing criteria:

$$
\xi_{ij} = \frac{x_{ij} - x_j^{\min}}{x_j^{\max} - x_j^{\min}}, \quad i = 1, 2, \ldots, m; \; j = 1, 2, \ldots, n; \quad (2)
$$

b) for minimizing criteria:

$$
\xi_{ij} = \frac{x_j^{\max} - x_{ij}}{x_j^{\max} - x_j^{\min}}, \quad i = 1, 2, \ldots, m; \; j = 1, 2, \ldots, n; \quad (3)
$$

where $x_j^{\max} = \max\big(x_{1j}, x_{2j}, \ldots, x_{mj}\big)$ and $x_j^{\min} = \min\big(x_{1j}, x_{2j}, \ldots, x_{mj}\big)$. Upon normalizing the criteria of the initial decision matrix, all elements $\xi_{ij}$ are reduced to values in the interval [0, 1], so it can be said that all criteria have the same metric.

Step 2: For criterion $c_j$ ($j = 1, 2, \ldots, n$) we define the standard deviation $\sigma_j$, which represents the measure of deviation of the values of the alternatives for the given criterion from their average value. The standard deviation of a given criterion is the measure considered in the further process of defining the criteria weight coefficients.

Step 3: From the normalized matrix $\hat{X}$ we separate the vector $\xi_j = (\xi_{1j}, \xi_{2j}, \ldots, \xi_{mj})$ that contains the values of the alternatives $a_i$ ($i = 1, 2, \ldots, m$) for the given criterion $c_j$ ($j = 1, 2, \ldots, n$). After forming the vectors $\xi_j$, we construct the matrix $L = [l_{jk}]_{n\times n}$ that contains the coefficients of linear correlation of the vectors $\xi_j$ and $\xi_k$. The bigger the discrepancy between the criterion values of the alternatives for criteria $j$ and $k$, the lower the value of the coefficient $l_{jk}$. In that sense, expression (4) represents the measure of conflict of criterion $j$ in relation to the other criteria in the given decision matrix.
$$
\sum_{k=1}^{n}\big(1 - l_{jk}\big) \quad (4)
$$

The quantity of data $W_j$ contained in criterion $j$ is determined by combining the previously listed measures $\sigma_j$ and $l_{jk}$ as follows:

$$
W_j = \sigma_j\sum_{k=1}^{n}\big(1 - l_{jk}\big) \quad (5)
$$

Based on the previous analysis we can conclude that a higher value of $W_j$ means a larger quantity of data received from a given criterion, which in turn increases the relative significance of the given criterion for the given decision process.

Step 4: The objective weights of the criteria are reached by normalizing the measures $W_j$:

$$
w_j = \frac{W_j}{\sum_{k=1}^{n}W_k} \quad (6)
$$

Diakoulaki et al. (1995) and Deng et al. (2000) recommend determining the criteria weights on the basis of the values of the standard deviations alone, expression (7):

$$
w_j = \frac{\sigma_j}{\sum_{k=1}^{n}\sigma_k} \quad (7)
$$

where $\sigma_j$ stands for the standard deviation defined in Step 2.

3. Modification of the CRITIC method: CRITIC-M method

The modification of the CRITIC method presented in this section of the paper is based on two assumptions: 1) modification of the normalization of the data in the initial decision matrix and 2) modification of the expression for determining the final values of the criteria weights.

1) Motivation for modifying the normalization of the data in the initial decision matrix. In the original CRITIC method we apply linear normalization, which entails that each column of the normalized matrix contains at least one element with value 0 and one with value 1. The only exception would be a column in which all values are the same (which rarely happens), in which case this criterion has no influence on the final decision. The distribution of the normalized values over the interval [0, 1] increases the standard deviations, which in turn significantly influences the values of the criteria weight coefficients. If the standard deviation is close to zero for a certain criterion, then all elements regarding that criterion are centred around the average value of the elements for this criterion; in this situation, all values regarding this criterion are approximately equal, so this criterion does not influence the choice. In the modified CRITIC method, normalization of the elements of the initial decision matrix entails dividing all elements of the initial decision matrix by the maximum value in their column, expression (8):

$$
\xi_{ij} = \frac{x_{ij}}{x_j^{\max}}, \quad i = 1, 2, \ldots, m; \; j = 1, 2, \ldots, n; \quad (8)
$$

where $x_j^{\max} = \max\big(x_{1j}, x_{2j}, \ldots, x_{mj}\big)$. By applying expression (8) we normalize the maximized criteria of the initial decision matrix. Normalization of the minimized criteria is done in two steps. In the first step, the values are normalized as with the maximized criteria, i.e. by applying expression (8); in this way we arrive at the values $\xi_{ij}^{*}$. In the second step, we normalize the values by applying expression (9):

$$
\xi_{ij} = \xi_j^{*\max} - \xi_{ij}^{*} + \xi_j^{*\min}, \quad i = 1, 2, \ldots, m; \; j = 1, 2, \ldots, n; \quad (9)
$$

where $\xi_j^{*\max} = \max\big(\xi_{1j}^{*}, \xi_{2j}^{*}, \ldots, \xi_{mj}^{*}\big)$ and $\xi_j^{*\min} = \min\big(\xi_{1j}^{*}, \xi_{2j}^{*}, \ldots, \xi_{mj}^{*}\big)$. This normalization process decreases the standard deviation, and the resulting values of the criteria weight coefficients better reflect the relationships between the data in the initial decision matrix.

2) Motivation for modifying the expression for determining the final criteria weight values. If the standard deviation is close to zero for a certain criterion, then all the elements regarding that criterion are centred around the average value of the elements for this criterion.
Therefore, all the values for this criterion are approximately equal and this criterion does not influence the choice. Keeping this in mind, we adjust the expression for determining the objective criteria weight values, expression (10):

$$
w_j = \frac{\sigma_j\dfrac{\bar{\xi}_j}{1-\bar{\xi}_j}}{\displaystyle\sum_{k=1}^{n}\sigma_k\dfrac{\bar{\xi}_k}{1-\bar{\xi}_k}} \quad (10)
$$

where $\bar{\xi}_j$ stands for the arithmetic average of the elements of the normalized decision matrix for criterion $j$, i.e. $\bar{\xi}_j = \frac{1}{m}\sum_{i=1}^{m}\xi_{ij}$. Expression (10) represents an extension of expression (7) obtained by introducing the average values, which favours criteria whose average values are closer to the ideal value, i.e. closer to one; for such a criterion, many alternatives have values close to the maximum. In this way, we introduce a certain amount of subjectivity into the objective methodology of the CRITIC method.

The following section presents the steps of the modified CRITIC (CRITIC-M) method:

Step 1: Starting with the initial decision matrix $X = [x_{ij}]_{m\times n}$, we normalize the elements of the initial decision matrix and form the normalized matrix $\hat{X} = [\xi_{ij}]_{m\times n}$. The normalization is done by applying expressions (8) and (9): maximized criteria (a higher value is better) are normalized by applying expression (8), while minimized criteria (a lower value is better) are normalized by applying expression (9).

Step 2: Calculation of the standard deviations of the elements of the normalized matrix $\hat{X}$. As in the classic CRITIC method, for each criterion $c_j$ ($j = 1, 2, \ldots, n$) we define the standard deviation $\sigma_j$.

Step 3: Constructing the matrix of linear correlations $L = [l_{jk}]_{n\times n}$. For each criterion $c_j$, from the normalized matrix $\hat{X}$ we define the vector $\xi_j = (\xi_{1j}, \xi_{2j}, \ldots, \xi_{mj})$ and calculate the linear correlations of the vectors $\xi_j$ and $\xi_k$. Summing the linear correlations per criterion results in the measure of criteria conflict:

$$
\sum_{k=1}^{n}\big(1 - l_{jk}\big) \quad (11)
$$

The quantity of data $W_j$ in criterion $j$ is determined by applying expression (12):

$$
W_j = \sigma_j\sum_{k=1}^{n}\big(1 - l_{jk}\big) \quad (12)
$$

Step 4: Determining the weight coefficients of the criteria. The objective weights of the criteria are obtained by applying expression (13):

$$
w_j = \frac{W_j\dfrac{\bar{\xi}_j}{1-\bar{\xi}_j}}{\displaystyle\sum_{k=1}^{n}W_k\dfrac{\bar{\xi}_k}{1-\bar{\xi}_k}} \quad (13)
$$

The weights of the criteria can also be determined on the basis of the values of the standard deviations alone, expression (14):

$$
w_j = \frac{\sigma_j\dfrac{\bar{\xi}_j}{1-\bar{\xi}_j}}{\displaystyle\sum_{k=1}^{n}\sigma_k\dfrac{\bar{\xi}_k}{1-\bar{\xi}_k}} \quad (14)
$$

where $\sigma_j$ stands for the standard deviation.

4. Determining criteria weights using the CRITIC-M method

Example 1: The following section demonstrates the application of the CRITIC-M method on an example that considers the evaluation of five alternatives $a_i$ ($i = 1, 2, \ldots, 5$) in relation to four criteria $c_j$ ($j = 1, 2, \ldots, 4$). All criteria in the initial decision matrix are maximized (max). The initial decision matrix $X = [x_{ij}]_{5\times 4}$ is presented by expression (15):

$$
X = \begin{bmatrix}
8 & 4 & 10 & 2 \\
7 & 6 & 4 & 6 \\
5 & 5 & 6 & 7 \\
6 & 6 & 7 & 8 \\
5 & 7 & 6 & 6
\end{bmatrix} \quad (15)
$$

where the rows correspond to the alternatives $a_1$ to $a_5$ and the columns to the criteria $c_1$ to $c_4$. In the following section we present the application of the CRITIC-M method following the steps defined in the previous section of the paper:

Step 1: Normalization of the initial decision matrix (15). Since all criteria are maximized, expression (8) is used for normalizing the elements.
The normalized matrix is presented by expression (16):

$$
\hat{X} = \begin{bmatrix}
1.000 & 0.571 & 1.000 & 0.250 \\
0.875 & 0.857 & 0.400 & 0.750 \\
0.625 & 0.714 & 0.600 & 0.875 \\
0.750 & 0.857 & 0.700 & 1.000 \\
0.625 & 1.000 & 0.600 & 0.750
\end{bmatrix} \quad (16)
$$

Normalization of element a1-c2 in matrix (16) was done in the following way: $\xi_{12} = x_{12}/x_2^{\max} = 4/7 = 0.571$, where $x_2^{\max} = \max\{4, 6, 5, 6, 7\} = 7$. The normalization of the remaining elements of matrix (16) was done in a similar way.

Step 2: Calculation of the standard deviations of the elements of the normalized matrix (16). We obtain the standard deviations for the criteria $\sigma_j = \{0.1630, 0.1629, 0.2191, 0.2850\}$.

Step 3: The matrix of linear correlations $L = [l_{jk}]_{4\times 4}$ is presented by expression (17):

$$
L = \begin{bmatrix}
1.000 & -0.605 & 0.473 & -0.740 \\
-0.605 & 1.000 & -0.681 & 0.635 \\
0.473 & -0.681 & 1.000 & -0.671 \\
-0.740 & 0.635 & -0.671 & 1.000
\end{bmatrix} \quad (17)
$$

By applying expression (11) and matrix (17), we arrive at the measures of criteria conflict $\{3.873, 3.651, 3.878, 3.776\}$. The value for criterion $c_1$ is obtained in the following way: $(1 - 1) + (1 - (-0.605)) + (1 - 0.473) + (1 - (-0.740)) = 3.873$; the remaining values are obtained in a similar way. By applying expression (12) we define the quantity of data $W_j = \{0.6312, 0.5947, 0.8497, 1.0763\}$. The quantity of data for criterion $c_1$ is obtained as $W_1 = 0.1630 \cdot 3.873 = 0.6312$; the remaining values $W_j$ are calculated in a similar way.

Step 4: Determining the objective values of the criteria weights. By applying expression (13) we arrive at the criteria weight coefficients $w_j = \{0.2405, 0.2632, 0.1825, 0.3139\}$. The value of the weight coefficient $w_1$ is obtained in the following way:

$$
w_1 = \frac{0.6312\cdot\frac{0.775}{1-0.775}}{0.6312\cdot\frac{0.775}{1-0.775} + 0.5947\cdot\frac{0.800}{1-0.800} + 0.8497\cdot\frac{0.660}{1-0.660} + 1.0763\cdot\frac{0.725}{1-0.725}} = 0.2405
$$

In a similar way we arrive at the remaining criteria weight values. The criteria weights can also be calculated by applying expression (14), i.e. on the basis of the standard deviations $\sigma_j$. By applying expression (14) we arrive at the weight coefficients $w_j = \{0.2349, 0.2726, 0.1780, 0.3145\}$. The value of the weight coefficient of criterion $c_1$ is then obtained as

$$
w_1 = \frac{0.1630\cdot\frac{0.775}{1-0.775}}{0.1630\cdot\frac{0.775}{1-0.775} + 0.1629\cdot\frac{0.800}{1-0.800} + 0.2191\cdot\frac{0.660}{1-0.660} + 0.2850\cdot\frac{0.725}{1-0.725}} = 0.2349
$$

The values of the weight coefficients of the remaining criteria are obtained in a similar way. (In these calculations, 0.775, 0.800, 0.660 and 0.725 are the arithmetic averages $\bar{\xi}_j$ of the columns of the normalized matrix (16).)

Example 2: In the following section, we present the application of the CRITIC-M method on an example that considers the evaluation of six alternatives $a_i$ ($i = 1, 2, \ldots, 6$) in relation to three criteria $c_j$ ($j = 1, 2, 3$). Criteria c1 and c3 are maximized (max), while criterion c2 is minimized (min). The initial decision matrix $X = [x_{ij}]_{6\times 3}$ is presented by expression (18):

$$
X = \begin{bmatrix}
15 & 525 & 7 \\
30 & 400 & 5 \\
50 & 210 & 8 \\
30 & 350 & 5 \\
30 & 400 & 1 \\
20 & 350 & 3
\end{bmatrix} \quad (18)
$$

The application of the CRITIC-M method on Example 2 is presented in the following section:

Step 1: Normalization of the elements of matrix (18) is done by applying expressions (8) and (9).
The normalized matrix is presented by expression (19):

$$
\hat{X} = \begin{bmatrix}
0.300 & 0.400 & 0.875 \\
0.600 & 0.638 & 0.625 \\
1.000 & 1.000 & 1.000 \\
0.600 & 0.733 & 0.625 \\
0.600 & 0.638 & 0.125 \\
0.400 & 0.733 & 0.375
\end{bmatrix} \quad (19)
$$

Normalization of element a1-c1 in matrix (19) is done by applying expression (8): $\xi_{11} = x_{11}/x_1^{\max} = 15/50 = 0.300$, where $x_1^{\max} = \max\{15, 30, 50, 30, 30, 20\} = 50$. Normalization of element a1-c2 in matrix (19) is done by applying expression (9): $\xi_{12} = \xi_2^{*\max} - \xi_{12}^{*} + \xi_2^{*\min} = 1.000 - 1.000 + 0.400 = 0.400$, where $\xi_2^{*\max} = \max\{1.000, 0.762, 0.400, 0.667, 0.762, 0.667\} = 1.000$ and $\xi_2^{*\min} = \min\{1.000, 0.762, 0.400, 0.667, 0.762, 0.667\} = 0.400$. The normalization of the remaining elements of matrix (19) was done in a similar way.

Step 2: From the normalized matrix (19) we obtain the standard deviations for the criteria $c_j$ ($j = 1, 2, 3$): $\sigma_j = \{0.240, 0.195, 0.320\}$.

Step 3: The matrix of linear correlations $L = [l_{jk}]_{3\times 3}$ is presented by expression (20):

$$
L = \begin{bmatrix}
1.000 & 0.866 & 0.320 \\
0.866 & 1.000 & 0.189 \\
0.320 & 0.189 & 1.000
\end{bmatrix} \quad (20)
$$

By applying expression (11) and matrix (20), we obtain the measures of criteria conflict $\{0.814, 0.945, 1.491\}$, while by applying expression (12) we define the quantity of data $W_j = \{0.196, 0.184, 0.478\}$.

Step 4: Determining the objective values of the criteria weight coefficients. By applying expressions (13) and (14) respectively, we reach the criteria weight coefficients $w_j = \{0.1937, 0.2903, 0.5160\}$ and $w_j = \{0.2670, 0.3447, 0.3883\}$.

Table 1 presents the criteria weight coefficients reached using the classic CRITIC method and the CRITIC-M method in Examples 1 and 2.

Table 1. Criteria weight coefficients obtained by applying CRITIC and CRITIC-M

            Criteria   CRITIC    CRITIC-M, expression (13)   CRITIC-M, expression (14)
Example 1   c1         0.2221    0.2405                      0.2349
            c2         0.3994    0.2632                      0.2726
            c3         0.1979    0.1825                      0.1780
            c4         0.1805    0.3139                      0.3145
Example 2   c1         0.2468    0.1937                      0.2670
            c2         0.2708    0.2903                      0.3447
            c3         0.4824    0.5160                      0.3883
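Before discussing these results, note that the comparison in Table 1 can be reproduced with a short script. The sketch below is our own illustration of the two weighting schemes (the classic CRITIC of expressions (2)-(6) and CRITIC-M with expressions (8), (9) and (14)), not the authors' code; the sample standard deviation (ddof=1) is assumed, since it matches the values reported in the examples.

```python
import numpy as np

def critic_weights(X, maximize):
    """Classic CRITIC: min-max normalization, sample std, conflict from pairwise correlations."""
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    norm = np.where(maximize, (X - lo) / (hi - lo), (hi - X) / (hi - lo))
    sigma = norm.std(axis=0, ddof=1)
    corr = np.corrcoef(norm, rowvar=False)
    conflict = (1 - corr).sum(axis=0)              # expression (4)
    w = sigma * conflict                           # expression (5)
    return w / w.sum()                             # expression (6)

def critic_m_weights(X, maximize):
    """CRITIC-M, standard-deviation variant (expression (14))."""
    X = np.asarray(X, dtype=float)
    norm = X / X.max(axis=0)                       # expression (8)
    norm = np.where(maximize, norm,
                    norm.max(axis=0) - norm + norm.min(axis=0))  # expression (9)
    sigma = norm.std(axis=0, ddof=1)
    mean = norm.mean(axis=0)
    w = sigma * mean / (1 - mean)                  # favours criteria with means close to 1
    return w / w.sum()                             # expression (14)

# Example 2 from the text: criteria c1 and c3 are maximized, c2 is minimized.
X2 = [[15, 525, 7], [30, 400, 5], [50, 210, 8], [30, 350, 5], [30, 400, 1], [20, 350, 3]]
maximize = np.array([True, False, True])
print(critic_weights(X2, maximize))    # approx. [0.247, 0.271, 0.482]
print(critic_m_weights(X2, maximize))  # approx. [0.267, 0.345, 0.388]
```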
Table 1 presents two groups of data obtained using the CRITIC-M method. The first group of data was obtained using expression (13), while the second group was obtained using expression (14). Based on the data in Table 1, we note a very small difference between the results of expressions (13) and (14) of the CRITIC-M method. We also note that determining the conflict between criteria through the coefficients of linear correlation, expression (13), does not identify significant differences that would influence the final values of the criteria weights. However, the calculation of the elements of the linear correlation matrix and the introduction of those data into the calculation of the criteria weights significantly complicates the computation of the criteria weight coefficients. Therefore, we recommend the application of the standard deviation (expression (14)) for calculating the criteria weights, because it represents quite well the relationships between the criteria in the initial decision matrix.

By comparing the weight coefficients obtained using the CRITIC and CRITIC-M methods, we note that there are significant differences between the resulting values. The differences in the weights are due to 1) the different ways of normalizing the data (linear normalization in CRITIC and percentual normalization in CRITIC-M) and 2) the application of different aggregation functions for obtaining the final values of the criteria weight coefficients. Applying linear normalization in the CRITIC method results in higher values of the standard deviation, because the normalization distributes all values over the interval [0, 1]. On the other hand, by applying percentual normalization in the CRITIC-M method, all normalized values are distributed in the interval $[x_j^{\min}/x_j^{\max}, 1]$. This shifts the distribution of all values towards the ideal value, i.e. towards one; as a consequence, the standard deviation values are lower. Both examples in this paper show that the criteria weight coefficients are grouped more closely around their average value. We can also point out that the CRITIC-M method contributes to a better objectivity of the results. This can be noted in the second example for criteria c1 and c2: with the classic CRITIC method, there are very small differences between the weight coefficients of criteria c1 and c2, while with the CRITIC-M method the differences between these criteria are clearly marked. Further, in the CRITIC-M method, the function for aggregating the values of the weight coefficients has been changed by introducing the average values. The reason for introducing the average values and their influence on the criteria weights is to favour criteria whose average values are closer to the ideal value. By introducing this type of subjectivity into the CRITIC-M method, we eliminate one of the bad characteristics of the classic CRITIC method: assigning low values of the criteria weight coefficients to criteria that, for most alternatives, have values close to the ideal value.

5. Conclusion

Weight coefficients are a calibration tool for decision models, and the quality of their definition directly influences the quality of the decision. The reason for studying this problem lies in the fact that each of the subjective and objective methods for determining criteria weights has its advantages and flaws. This paper considers certain limitations of the CRITIC method and puts forward a modification with a new algorithm, CRITIC-M. The modification of the CRITIC method presented in this paper is based on a new approach to the normalization of the values in the initial decision matrix and on a new approach to the aggregation of the data from the initial decision matrix. The new normalization process makes it possible to obtain lower standard deviations of the normalized values, which contributes to a more objective representation of the relationships between the data in the initial decision matrix. Apart from modifying the CRITIC method through a new normalization process, we also present a new approach to aggregating the values of the weight coefficients. Aggregation of the weight coefficient values in CRITIC-M involves the average values of the normalized elements. The introduction of the average values aims to favour criteria for which the alternatives have values close to the ideal values. Although this approach introduces a certain degree of subjectivity into the CRITIC methodology, the authors maintain that it enables a more comprehensive understanding of the data in the initial decision matrix and a more objective set of weight coefficient values. It is clear that the values obtained using objective and subjective methods can lead to completely different results, i.e. to completely different weight coefficient values.
Keeping this in mind, objective methods for determining criteria weights can be used to correct criteria weights determined using subjective methods or based on the subjective preferences of decision makers. Therefore, the presented CRITIC-M methodology can be a useful tool for correcting criteria weights. Further, future research can be directed towards defining absolute, ideal and anti-ideal values in the initial decision matrix. This would eliminate rank reversal problems in the case of adding new alternatives to the initial decision matrix and reduce their indirect influence on significant changes to the criteria weights. Future research should also be directed towards the application of uncertainty theories, such as fuzzy theory, in the CRITIC-M method. This is supported by the significant position of fuzzy theory in the field of multi-criteria decision making and by the fact that, as far as the authors are aware, there has so far been no presentation of extended CRITIC methods in fuzzy environments.

Author contributions: Each author has participated and contributed sufficiently to take public responsibility for appropriate portions of the content.

Funding: This research received no external funding.

Conflicts of interest: The authors declare no conflicts of interest.

References

Charnes, A., Cooper, W. W., & Rhodes, E. (1978). Measuring the efficiency of decision making units. European Journal of Operational Research, 2(6), 429-444.

Deng, H., Yeh, C. H., & Willis, R. J. (2000). Inter-company comparison using modified TOPSIS with objective weights. Computers and Operations Research, 27, 963-973.

Diakoulaki, D., Mavrotas, G., & Papayannakis, L. (1995). Determining objective weights in multiple criteria problems: The CRITIC method. Computers and Operations Research, 22, 763-770.

Edwards, W., & Barron, H. (1994). SMARTS and SMARTER: Improved simple methods for multiattribute utility measurement. Organizational Behavior and Human Decision Processes, 60(3), 306-325.

Fan, Z.-P. (1996). Complicated multiple attribute decision making: Theory and applications. Ph.D. dissertation, Northeastern University, Shenyang, PRC.

Gabus, A., & Fontela, E. (1972). World problems, an invitation to further thought within the framework of DEMATEL. Battelle Geneva Research Centre, Switzerland, Geneva, 1-8.

Keeney, R. L., & Raiffa, H. (1976). Decisions with multiple objectives. Wiley, New York.

Keršuliene, V., Zavadskas, E. K., & Turskis, Z. (2010). Selection of rational dispute resolution method by applying new step-wise weight assessment ratio analysis (SWARA). Journal of Business Economics and Management, 11(2), 243-258.

Ma, J., Fan, Z.-P., & Huang, L.-H. (1999). A subjective and objective integrated approach to determine attribute weights. European Journal of Operational Research, 112, 397-404.

Pamučar, D., Stević, Ž., & Sremac, S. (2018). A new model for determining weight coefficients of criteria in MCDM models: Full consistency method (FUCOM). Symmetry, 10(9), 393.

Roberts, R., & Goodwin, P. (2002). Weight approximations in multi-attribute decision models. Journal of Multicriteria Decision Analysis, 11, 291-303.

Saaty, T. L. (1980). Analytic hierarchy process. McGraw-Hill, New York.

Shannon, C. E., & Weaver, W. (1947). The mathematical theory of communication. Urbana: The University of Illinois Press.
Srdjevic, B., Medeiros, Y. D. P., Faria, A. S., & Schaer, M. (2003). Objektivno vrednovanje kriterijuma performanse sistema akumulacija. Vodoprivreda, 35, 163-176 (in Serbian).

Thurstone, L. L. (1927). A law of comparative judgment. Psychological Review, 34, 273.

Von Winterfeldt, D., & Edwards, W. (1986). Decision analysis and behavioral research. Cambridge University Press.

Žižović, M., & Pamučar, D. (2019). New model for determining criteria weights: Level Based Weight Assessment (LBWA) model. Decision Making: Applications in Management and Engineering, 2(2), 126-137.

Žižović, M., Pamučar, D., Ćirović, G., Žižović, M. M., & Miljković, B. (2020). A model for determining weight coefficients by forming a non-decreasing series at criteria significance levels (NDSL). Mathematics, 8(5), 745.

© 2018 by the authors. Submitted for possible open access publication under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

Decision Making: Applications in Management and Engineering
Vol. 4, Issue 1, 2021, pp. 1-18.
ISSN: 2560-6018, eISSN: 2620-0104
DOI: https://doi.org/10.31181/dmame2104001d

Violation of the assumption of homoscedasticity and detection of heteroscedasticity

Irena Djalic1* and Svetlana Terzic1

1 University of East Sarajevo, Faculty of Transport and Traffic Engineering, Doboj, Republic of Srpska, Bosnia and Herzegovina

* Corresponding author. E-mail addresses: i.naric@yahoo.com (I. Djalic), terzic_svetlana@yahoo.com (S. Terzic)

Received: 1 September 2020; accepted: 20 October 2020; available online: 24 October 2020.

Original scientific paper

Abstract: In this paper, it is assumed that the assumption of homoskedasticity is violated in a certain classical linear regression model, and this is checked with several methods. The model refers to the dependence of savings on income. The hypothesis is examined by means of data simulation. The aim of this paper is to develop a methodology for testing a given model for the presence of heteroskedasticity. We use the graphical method in combination with four tests (Goldfeld-Quandt, Glejser, White and Breusch-Pagan). The methodology used in this paper shows that the assumption of homoskedasticity is violated, i.e. that heteroskedasticity is present.

Key words: economic phenomena; heteroskedasticity; homoskedasticity; random errors.

1. Introduction

Econometrics is a discipline that determines the connections between economic phenomena and confirms or refutes economic theory, starting from mathematical equations and forming econometric models suitable for testing. Regression analysis is one of the most commonly used tools in econometrics for describing the relationships between economic phenomena. One of the classic assumptions of linear regression is homoskedasticity. Homoskedasticity implies that the variance of the random error is constant and equal for all observations. When the random errors of the classical linear regression model are not homoskedastic, they are heteroskedastic (Mladenović & Petrović, 2017). The main goal of the paper is to show how the linear regression model behaves when the assumption of homoskedasticity is violated and how this violation is detected.
The basic contribution of the paper is that it brings together, in one place, a developed method for detecting the violation of homoskedasticity, i.e. the existence of heteroskedasticity, in linear regression models. This paper presents a methodology for detecting heteroskedasticity in linear regression models by combining a graphical method with four tests. After the introduction, a review of the literature is given, after which the basics of heteroskedasticity are presented; in this part of the paper, the Goldfeld-Quandt, Glejser, White and Breusch-Pagan tests are presented. At the end of the paper, concluding remarks are made and recommendations for further research are given.

2. Literature review

Aue et al. (2017) state that heteroskedasticity is a common characteristic of financial time series and most often refers to the process of model development using autoregressive conditional heteroskedastic and generalized autoregressive conditional heteroskedastic processes. Ferman & Pinto (2019) formed a model of inference for differences in differences with few treated and many control groups in the presence of heteroskedasticity. Charpentier et al. (2019) developed the Gini-White test, which shows greater power in addressing the problem of heteroskedasticity than the ordinary White test when outlying observations affect the data. Moussa (2019) analyzes cases in which heteroskedasticity is the result of individual effects or idiosyncratic errors, or both. Linton & Xiao (2019) study the efficient estimation of nonparametric regression in the presence of heteroskedasticity and conclude that in many popular nonparametric regression models their method has a lower asymptotic variance than the usual unweighted procedures. A large number of authors pay attention to heteroskedasticity and develop models for solving particular problems (Baum & Schaffer, 2019; Brüggemann et al., 2016; Lütkepohl & Netšunajev, 2017; Cattaneo et al., 2018; Ou et al., 2016; Sato & Matsuda, 2017). Taşpınar et al. (2019) investigate the finite-sample properties of the heteroskedasticity-robust generalized method of moments estimator (RGMME), i.e. they develop a robust spatial econometric model with an unknown form of heteroskedasticity. Crudu et al. (2017) propose new inference procedures for models with instrumental variables in the presence of many, potentially weak instruments that are robust to the presence of heteroskedasticity. Lütkepohl & Velinov (2016) compare models of long-term restrictions that are widely used to identify structural shocks in vector autoregressive (VAR) analysis based on heteroskedasticity. Harris & Kew (2017) test adaptive hypotheses for a fractional differencing parameter in a parametric ARFIMA model with unconditional heteroskedasticity of unknown form. In the case of heteroskedasticity, there are occasionally precise theoretical reasons for assuming that the errors have different variances for different values of the independent variable; very often the arguments for the presence of heteroskedasticity are well defined, and sometimes there is only a vague suspicion that the assumption of homoskedasticity is too strong (Barreto & Howland, 2006). It is important to note that heteroskedasticity is a common occurrence in spatial samples due to the nature of data collection.
Obvious sources of heteroskedasticity are associated with the different dimensions of different regions in the study area and the unequal concentrations of population and economic activity in rural and urban areas (Arbia, 2006). Baum & Lewbel (2019) provide advice and guidance to researchers who wish to use tests to check for heteroskedasticity.

3. Methodology

The simplest form of linear regression, which shows a linear relationship between two phenomena, is the simple linear regression:

$$
Y = \alpha + \beta X + \varepsilon \quad (1)
$$

where $\varepsilon$ is the random error that we make in the linear regression, and $\alpha$ and $\beta$ are unknown parameters. To estimate the unknown parameters, we use a sample. For $n$ fixed values of the independent variable $X$, the values of the variable $Y$ are determined. In this way, $n$ pairs $(X_1, Y_1), (X_2, Y_2), \ldots, (X_n, Y_n)$ are obtained, which form the model of the simple linear regression sample:

$$
Y_i = \alpha + \beta X_i + \varepsilon_i, \quad i = 1, 2, \ldots, n \quad (2)
$$

The assumption of homoskedasticity for the random variable $\varepsilon_i$ is:

$$
V(\varepsilon_i) = E(\varepsilon_i^{2}) = \sigma^{2} = \text{const.}, \quad \text{for each } i = 1, 2, \ldots, n \quad (3)
$$

When this assumption is violated, that is, when the random errors of the classical linear regression model do not satisfy this property, they are heteroskedastic. If the assumption of homoskedasticity (Jovičić, 2011):

$$
V(\varepsilon_i) = E\big(\varepsilon_i - E(\varepsilon_i)\big)^{2} = E(\varepsilon_i^{2}) = \sigma^{2}, \quad \text{for each } i \quad (4)
$$

is not met, but the variances are different, i.e.

$$
V(\varepsilon_i) = \sigma_i^{2}, \quad i = 1, \ldots, n, \quad (5)
$$

respectively (Mladenović & Petrović, 2017),

$$
V(\varepsilon_1) = \sigma_1^{2},\; V(\varepsilon_2) = \sigma_2^{2},\; \ldots,\; V(\varepsilon_n) = \sigma_n^{2}, \qquad \sigma_1^{2} \neq \sigma_2^{2} \neq \cdots \neq \sigma_n^{2} \quad (6)
$$

it can be said that the errors are heteroskedastic, i.e. that there is heteroskedasticity in the model. Figure 1 presents a model in which heteroskedasticity of the error is assumed. It shows the growth of savings with increasing income, where the variance of savings differs across income levels: the variance is not constant but increases with the growth of income, which corresponds to real economic relations (Mladenović & Petrović, 2017).

Figure 1. Heteroskedastic errors (source: Mladenović & Petrović, 2017)

Heteroskedasticity can also be caused by specification errors. For example, omitting an important regressor, whose influence is then absorbed by the error, can produce a different variation of the error for different observations. Similarly, a wrong functional form of the model can lead to heteroskedasticity of the error. As data collection techniques advance, which implies the provision of representative samples for statistical processing, errors and thus their dispersions decrease, and this may be another reason for the occurrence of heteroskedasticity.
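A data-generating process of the kind shown in Figure 1 is easy to simulate. The following minimal numpy sketch (our own illustration, not the authors' simulation) produces income-savings data whose error standard deviation grows with income, as in expression (6); the parameter values and the proportional form of the variance are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_savings(n=200, alpha=50.0, beta=0.2, gamma=0.02):
    """Simulate savings = alpha + beta*income + eps with V(eps_i) = (gamma*income_i)^2,
    i.e. the error standard deviation grows proportionally with income (assumed form)."""
    income = rng.uniform(500, 5000, size=n)
    eps = rng.normal(0.0, gamma * income)   # heteroskedastic errors: sigma_i depends on x_i
    savings = alpha + beta * income + eps
    return income, savings

income, savings = simulate_savings()
```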
this condition is not precise enough for the sample presented in figure 2. data that are far from the sample regression line provide less useful information about its position than those that are closer to it. higher residual values in absolute terms correspond to more distant data, and these residuals dominate the total residual sum of squares. therefore, it is realistic to expect that the application of the ordinary least squares method does not provide estimates with desirable statistical properties. suppose that in the model (mladenović & petrović, 2017):

$y_i = \beta_0 + \beta x_i + \varepsilon_i$ (7)

there is heteroskedasticity:

$v(\varepsilon_i) = \sigma_i^2, \quad i = 1, 2, \dots, n$ (8)

the estimate $b$ of the parameter $\beta$, obtained by the ordinary least squares method, is unbiased, because the corresponding proof does not use the assumption of stability of the variance of the random error. to determine the variance of the estimate $b$, we start from the expression:

$b - \beta = \sum_{i=1}^{n} w_i \varepsilon_i$ (9)

$w_i = \frac{x_i}{\sum_{i=1}^{n} x_i^2}$ (10)

based on which the variance is:

$v(b) = e(b-\beta)^2 = e\left(\sum_{i=1}^{n} w_i \varepsilon_i\right)^2 = \sum_{i=1}^{n} w_i^2 e(\varepsilon_i^2) + 2\sum_{i<j} w_i w_j e(\varepsilon_i \varepsilon_j)$ (11)

in eq. (11), all elements of the form $e(\varepsilon_i \varepsilon_j)$, $i \neq j$, are equal to zero. the expression for the variance of the estimate $b$ is therefore:

$v(b) = \sum_{i=1}^{n} w_i^2 e(\varepsilon_i^2) = \sum_{i=1}^{n} w_i^2 \sigma_i^2 = \frac{\sum_{i=1}^{n} x_i^2 \sigma_i^2}{\left(\sum_{i=1}^{n} x_i^2\right)^2}$ (12)

under constant error variance, the variance of the estimate $b$ in the simple linear regression model reduces to:

$v(b) = \frac{\sigma^2}{\sum_{i=1}^{n} x_i^2}$ (13)

when the existence of heteroskedasticity is neglected, the estimate of the variance of the estimate $b$ is obtained by the formula:

$s_b^2 = \frac{s^2}{\sum_{i=1}^{n} x_i^2}, \qquad s^2 = \frac{\sum_{i=1}^{n} e_i^2}{n-2}$ (14)

when the variance of the random error grows in parallel with the explanatory variable, the estimate $s_b^2$ underestimates the actual variance of the estimate $b$. this arises because the estimate of the random error variance, $s^2$, underestimates the actual random error variance of the initial model. thus, the properties of the parameter estimates obtained by applying the ordinary least squares method in the presence of heteroskedasticity are:
1. the estimates are unbiased;
2. the estimates do not have minimal variance, that is, they are inefficient;
3. the estimate of the variance of the random error in most cases underestimates the actual variance; therefore, the estimate of the variance of the slope estimate, $s_b^2$, also underestimates the variance $v(b)$;
4. confidence intervals and tests based on the estimate of the variance of the random error are unreliable.
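the consequences listed above can be illustrated with a small monte carlo experiment. the sketch below is illustrative and not part of the original paper; it uses the same variance pattern as the simulation in section 4 ($\sigma_i^2 = 0.01 x_i^2$) and compares the conventional ols standard error of the slope with a heteroskedasticity-robust (hc1) standard error.

```python
# minimal monte carlo sketch (illustrative, not from the original paper): with
# error variance growing with x, the ols slope stays unbiased, the conventional
# standard error tends to understate the true sampling variability, and the
# hc1 robust standard error tracks it more closely on average.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = np.repeat([10.0, 20.0, 30.0], 10)          # income levels, as in the paper's simulation
X = sm.add_constant(x)

slopes, se_naive, se_robust = [], [], []
for _ in range(2000):
    eps = rng.normal(scale=np.sqrt(0.01 * x**2))   # v(eps_i) = 0.01 * x_i**2
    y = 2 + 3 * x + eps
    fit = sm.OLS(y, X).fit()
    slopes.append(fit.params[1])
    se_naive.append(fit.bse[1])                                        # assumes homoskedasticity
    se_robust.append(fit.get_robustcov_results(cov_type="HC1").bse[1]) # robust to heteroskedasticity

print("mean slope estimate:", np.mean(slopes))    # close to 3 -> unbiased
print("empirical sd of slope:", np.std(slopes))   # the 'true' variability
print("mean conventional se:", np.mean(se_naive)) # somewhat too small here
print("mean hc1 robust se:", np.mean(se_robust))  # closer to the empirical sd
```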
3.2. testing of heteroskedasticity

the true nature of heteroskedasticity is usually unknown, so the choice of the appropriate test depends on the nature of the data. but, as the amount of error variation around the mean value typically depends on the magnitude of the independent variables, all tests rely on examining whether the error variance is some function of the regressors. certain methods for testing the existence of heteroskedasticity are presented below.

3.2.1. graphic method

one of the simplest methods for examining the existence of heteroskedasticity consists of visually inspecting the residuals of the estimated model. it is common to form a point scatter diagram of the residuals $e_i$, or of their absolute values $|e_i|$, against the independent variable $x_i$. since the variance of the random error is $\sigma_i^2 = e(\varepsilon_i^2)$, there is a view that on the scatter diagram the residual values should be replaced by their squares, $e_i^2$. based on the point scatter diagrams, we can conclude whether heteroskedasticity exists and, if so, in what form it occurs, i.e. how the variance of the random error is generated. figure 3 presents some of the possible point scatter diagrams (mladenović, 2011).

figure 3. point scatter diagrams (mladenović, 2011)

the first graph corresponds to a model in which there is no systematic dependence between the variances of the random errors and the independent variable $x_i$. in such a model, the random errors can be considered homoskedastic. the other graphs show regularities in the position of the points on the scatter diagram, suggesting possible heteroskedasticity. the second graph indicates a linear dependence, while the third and fourth graphs represent a dependence of quadratic form, in the sense that the variance of the random error is correlated with $x_i^2$. graphic methods are only a means of preliminary analysis. in order to get a more precise answer to the question of whether heteroskedasticity is present, it is necessary to use appropriate tests.

3.2.2. goldfeld-quandt test

one of the earliest, very simple and often used tests is the goldfeld-quandt test (kalina & peštová, 2017). it tests the null hypothesis of constant random error variance against the alternative that the variance of the random error is a linear function of the independent variable. it is assumed that the random error is non-autocorrelated and normally distributed. the test procedure is as follows (mladenović & petrović, 2017):
- the observations from the sample are arranged according to increasing values of the independent variable;
- from the set of $n$ observations, $c$ central observations are omitted, so that further analysis is based on two sets of observations: the first $\frac{n-c}{2}$ and the last $\frac{n-c}{2}$ observations, where it is necessary to ensure that $\frac{n-c}{2} > k$, and $k$ is the number of estimated parameters;
- two regressions are estimated separately, based on the first $\frac{n-c}{2}$ and the last $\frac{n-c}{2}$ observations. the obtained residual sums of squares are denoted by $\sum e_1^2$ and $\sum e_2^2$ ($\sum e_1^2$ corresponds to the regression with the lower, and $\sum e_2^2$ to the regression with the higher values of the independent variable).

homoskedasticity of the random error implies the same degree of variation in the two subsets of observations, which is manifested by approximately equal values of the residual sums $\sum e_1^2$ and $\sum e_2^2$; in this case the quotient of the two sums is close to 1. on the contrary, the existence of heteroskedasticity results in a higher value of the residual sum $\sum e_2^2$. the purpose of the test is to check whether $\sum e_2^2 / \sum e_1^2$ is statistically significantly different from 1. assuming that the null hypothesis of constant variance is correct, the following holds:

$\frac{\sum e_1^2}{\sigma^2} \sim \chi^2_{\frac{n-c}{2}-k}$ (15)

$\frac{\sum e_2^2}{\sigma^2} \sim \chi^2_{\frac{n-c}{2}-k}$ (16)

where $k$ is the number of parameters estimated in the model. it follows that the ratio $\sum e_2^2 / \sum e_1^2$ has an f-distribution with $\frac{n-c}{2}-k$ and $\frac{n-c}{2}-k$ degrees of freedom. therefore, the goldfeld-quandt test statistic has the form:

$f = \frac{\sum e_2^2}{\sum e_1^2}$ (17)

if the calculated value of the f-statistic is higher than the corresponding critical value at a given significance level, we conclude that there is heteroskedasticity in the model.
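as an illustration (not part of the original paper), the goldfeld-quandt procedure described above is available in statsmodels; a minimal sketch, assuming the regressor and response are held in arrays x and y generated for the example:

```python
# minimal sketch of the goldfeld-quandt test with statsmodels (illustrative).
# het_goldfeldquandt sorts/splits the sample, fits ols on the low-x and high-x
# parts, and returns the f ratio of the two residual variances.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_goldfeldquandt

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(10, 30, size=60))
y = 2 + 3 * x + rng.normal(scale=0.1 * x)   # error sd grows with x (assumed example)
X = sm.add_constant(x)

# drop a central fraction of observations (here 20%), as described above
f_stat, p_value, ordering = het_goldfeldquandt(y, X, drop=0.2, alternative="increasing")
print(f"goldfeld-quandt f = {f_stat:.2f}, p-value = {p_value:.4f}")
```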
3.2.3. glejser test

the application of this test does not require a priori knowledge of the nature of heteroskedasticity; its form is determined during the testing. the test procedure is as follows (im, 2000):
- the initial regression $y_i = \beta_0 + \beta_1 x_{i1} + \dots + \beta_k x_{ik} + \varepsilon_i$ is estimated by the method of ordinary least squares and the residuals $e_i$ are calculated;
- the following regression is estimated:

$|e_i| = \alpha_0 + \alpha_1 x_i^{h} + error$ (18)

the values 1, $-1$ and $1/2$ are usually assigned to the parameter $h$, so that the following regressions are evaluated:

$|e_i| = \alpha_0 + \alpha_1 x_i + error$ (19)

$|e_i| = \alpha_0 + \alpha_1 / x_i + error$ (20)

$|e_i| = \alpha_0 + \alpha_1 \sqrt{x_i} + error$ (21)

- the statistical significance of the estimate of the parameter $\alpha_1$ is tested using the t-test;
- the coefficients of determination obtained for the different values of the parameter $h$ are compared.

statistical significance of the estimate of $\alpha_1$ leads to the conclusion that heteroskedasticity exists. the character of the heteroskedasticity is determined from the regression with the highest coefficient of determination.

3.2.4. white test

the test is based on comparing the variances of the estimators obtained by the method of ordinary least squares under homoskedasticity and under heteroskedasticity. if the null hypothesis is correct, the two estimated variances differ only due to sampling fluctuations. the null hypothesis of homoskedasticity of the random error is tested against the broadly specified alternative that the variance of the random error depends on the explanatory variables, their squares and their cross-products, i.e. the variation of the residuals under the combined action of the regressors is examined. the white test consists of the following steps (white, 1980):

step 1: estimate the model $y_i = \beta_0 + \beta_1 x_{i1} + \dots + \beta_k x_{ik} + \varepsilon_i$ to obtain the series of residuals $e_i$ and their squared values $e_i^2$.

step 2: estimate the auxiliary regression in which the squared residuals are a function of all regressors of the model, their squares and cross-products, i.e. apply ordinary least squares to $e_i^2 = \alpha_0 + \alpha_1 z_{i1} + \alpha_2 z_{i2} + \dots + \alpha_p z_{ip} + error$, $i = 1, 2, \dots, n$, where for simple regression $z_{i1} = x_i$ and $z_{i2} = x_i^2$, so the test is based on the analysis of the model $e_i^2 = \alpha_0 + \alpha_1 x_i + \alpha_2 x_i^2 + error$, $i = 1, 2, \dots, n$. a significant influence of the variables $x_i$ and $x_i^2$ on $e_i^2$ results in a high value of the coefficient of determination $r^2$. with two explanatory variables $x_{i1}$ and $x_{i2}$, the specification has $p = 5$ terms: $z_{i1} = x_{i1}$, $z_{i2} = x_{i2}$, $z_{i3} = x_{i1} x_{i2}$, $z_{i4} = x_{i1}^2$ and $z_{i5} = x_{i2}^2$. due to the possibly large loss of degrees of freedom, it is possible to use, instead of the individual values of the regressors, their linear combinations $\hat{y}_i$ and $\hat{y}_i^2$.

step 3: based on the coefficient of determination from the auxiliary regression, $r_w^2$, form the white test statistic $n r_w^2$, where $n$ is the sample size. asymptotically, under the null hypothesis of homoskedasticity, the test statistic $n r_w^2$ follows a $\chi^2$ distribution with the number of degrees of freedom equal to the number of regressors in the auxiliary regression: $n r_w^2 \sim \chi^2_p$.

step 4: if the calculated value of the test statistic is greater than the tabulated value, i.e. if the coefficient of determination in the auxiliary regression of the squared residuals is high enough, the homoskedasticity hypothesis is rejected.

the white test is not sensitive to departures of the errors from normality and is simple, so it is often used to test for the existence of heteroskedasticity. when there are many regressors, introducing the squares and all cross-products in the auxiliary regression can mean a large loss in the number of degrees of freedom; that is why the white test is often performed without cross-products.
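a minimal sketch of both tests (illustrative, not from the original paper): glejser has no ready-made function in statsmodels, so the three auxiliary regressions of $|e_i|$ are fitted directly, while the white test uses statsmodels' het_white; the arrays x and y are generated here only for the example.

```python
# minimal sketch (illustrative) of the glejser and white tests.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

rng = np.random.default_rng(2)
x = rng.uniform(10, 30, size=60)
y = 2 + 3 * x + rng.normal(scale=0.1 * x)   # heteroskedastic errors (assumed example)
X = sm.add_constant(x)

resid = sm.OLS(y, X).fit().resid
abs_e = np.abs(resid)

# glejser: |e_i| = a0 + a1 * x^h + error for h = 1, -1, 1/2; inspect the t-test on
# a1 and the r-squared of each regression, as described above
for label, reg in [("x", x), ("1/x", 1 / x), ("sqrt(x)", np.sqrt(x))]:
    aux = sm.OLS(abs_e, sm.add_constant(reg)).fit()
    print(f"glejser with {label:8s}: a1 = {aux.params[1]:.3f}, "
          f"p-value = {aux.pvalues[1]:.4f}, r2 = {aux.rsquared:.3f}")

# white: auxiliary regression of e_i^2 on the regressors, their squares and
# cross-products; n * r2 is compared with the chi-squared critical value
lm_stat, lm_pvalue, f_stat, f_pvalue = het_white(resid, X)
print(f"white n*r2 = {lm_stat:.2f}, p-value = {lm_pvalue:.4f}")
```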
3.2.5. breusch-pagan test

this test is based on the idea that the estimates of the regression coefficients obtained by the least squares method should not differ significantly from the maximum likelihood estimates if the homoskedasticity hypothesis is true (halunga et al., 2017). the null hypothesis of homoskedasticity of the random error is tested against a broadly specified alternative hypothesis about the influence of a number of factors on the variance of the random error. for simplicity, assume that the test examines the influence of the explanatory variable $x_i$ in a simple regression. the testing procedure is as follows (mladenović & nojković, 2017): the residuals $e_i$ are formed from the regression of $y_i$ on a constant and $x_i$. the average value of the sum of squared residuals is determined, $s_p^2 = \frac{\sum e_i^2}{n}$, and a new variable is then formed, $g_i = \frac{e_i^2}{s_p^2}$, $i = 1, 2, \dots, n$. from the regression of $g_i$ on $x_i$, the explained sum of squares of the dependent variable, $\sum \hat{g}_i^2$, is obtained. the ratio $\sum \hat{g}_i^2 / 2$ has a $\chi^2$ distribution with one degree of freedom. the heteroskedasticity hypothesis is accepted when the value of the calculated ratio $\sum \hat{g}_i^2 / 2$ is greater than the critical value of the $\chi^2$ distribution with one degree of freedom.

4. application of the model: data simulation

table 1 shows data simulated so that the error variance follows $\sigma_i^2 = 0.01 x_i^2$. the population line is $y = 2 + 3x$, where $y$ is savings and $x$ is income. the column $y_i$, $i = 1, \dots, 30$, contains the values of $y$ to which the errors $\varepsilon_i$ have been added.

table 1. display of simulated data

no.  x_i  y    eps_i      y_i
1    10   32   -0.13677   31.86323
2    10   32    1.045263  33.04526
3    10   32    0.324248  32.32425
4    10   32   -1.80589   30.19411
5    10   32    0.568473  32.56847
6    10   32   -0.17024   31.82976
7    10   32    0.676169  32.67617
8    10   32   -0.57257   31.42743
9    10   32   -1.53944   30.46056
10   10   32   -0.38377   31.61623
11   20   62   -4.85783   57.14217
12   20   62   -1.66701   60.33299
13   20   62    9.513881  71.51388
14   20   62    0.817791  62.81779
15   20   62  -11.1762    50.82381
16   20   62   -6.47024   55.52976
17   20   62    9.51661   71.51661
18   20   62    2.045394  64.04539
19   20   62    5.286107  67.28611
20   20   62    7.451416  69.45142
21   30   92   19.59112  111.5911
22   30   92   33.2486   125.2486
23   30   92  -23.2211    68.7789
24   30   92  -28.8606    63.13944
25   30   92   38.95497  130.955
26   30   92   -1.97921   90.02079
27   30   92  -36.9439    55.05615
28   30   92   12.09004  104.09
29   30   92   41.06767  133.0677
30   30   92  -38.0374    53.96263
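the simulation design of table 1 can be reproduced with a few lines of code; the sketch below is illustrative and the random draws will not match the paper's values exactly.

```python
# minimal sketch (illustrative) of the simulation design of table 1: thirty
# observations on the population line y = 2 + 3x with error variance
# v(eps_i) = 0.01 * x_i**2, followed by an ols fit as in tables 2 and 3.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
x = np.repeat([10.0, 20.0, 30.0], 10)            # income
eps = rng.normal(scale=np.sqrt(0.01 * x**2))     # heteroskedastic errors
y = 2 + 3 * x + eps                              # simulated savings

fit = sm.OLS(y, sm.add_constant(x)).fit()
print(fit.params)      # estimates of alpha and beta (population values 2 and 3)
print(fit.rsquared)    # coefficient of determination
print(fit.summary())   # full output, analogous to tables 2 and 3 in the paper
```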
based on the values $y_i$ and $x_i$ from table 1, the linear regression model is estimated:

$y_i = \alpha + \beta x_i + \varepsilon_i, \quad \varepsilon_i : n(0,\, 0.01 x_i^2), \quad i = 1, 2, \dots, 30$

after estimation, the following results were obtained (table 2):

table 2. coefficients

model          estimated value   standard error   p value
alpha-hat      1.022             8.880            0.909
beta-hat       3.090             0.411            0.000

after obtaining the coefficients, the analysis of variance of the model was performed (table 3):

table 3. analysis of variance

source        sum of squares   degrees of freedom   mean square   p value
regression    19090.319        1                    19090.319     0.000
residual       9462.096        28                     337.932
total         28552.414        29

the coefficient of determination is $r^2 = 0.669$. figure 4 graphically shows the simulation model.

figure 4. graphic representation of the population model

figure 4 shows the population line $y = 2 + 3x$ as a dashed line, while the sample line $\hat{y} = 1.022 + 3.090x$ is shown as a full line. the graph clearly shows that the scatter is larger for higher values of the independent variable $x$ and that the sample line $\hat{y}$ deviates slightly from the line $y$. after the graphical representation of the model, it can be assumed that certain deviations exist, so heteroskedasticity is tested with the previously described tests.

4.1. graphic method

figure 5, graph (a), clearly shows the relationship between the residuals and the independent variable $x$ (the larger $x$, the larger the residuals), while diagram (b) shows the dependence of the squared residuals on $x$ (a dependence of quadratic form).

figure 5. diagrams of residual scattering

4.2. goldfeld-quandt test

after ordering the observations in ascending order of $x$, two linear regression models $y_i = \alpha + \beta x_i + \varepsilon_i$ are estimated (for the first 15 and the last 15 observations).

the first 15 observations: $\hat{y}_i = 3.075 + 2.873 x_i$, $r^2 = 0.92$, with standard errors (3.324) and (0.235).
the last 15 observations: $\hat{y}_i = 9.516 + 2.803 x_i$, $r^2 = 0.22$, with standard errors (39.369) and (1.454).

the residual sum of squares for the first 15 observations is 18.411, and for the last 15 observations it is 704.495. based on these residual sums, the value of the test statistic is:

$f = \frac{704.495}{18.411} = 38.26$

as the critical value of the f-distribution with 13 and 13 degrees of freedom at the 0.05 significance level is 2.58, the test shows that heteroskedasticity is present (the value of the test statistic is higher than the critical value).

4.3. glejser test

three linear regression models are tested:

model 1: $|e_i| = \alpha + \beta x_i + error$
model 2: $|e_i| = \alpha + \beta / x_i + error$
model 3: $|e_i| = \alpha + \beta \sqrt{x_i} + error$

the results are shown in table 5.

table 5. the results of the glejser test

          estimated parameters   estimated values   standard error   p-value   r2
model 1   alpha-hat              -15.419             4.169           0.001     0.631
          beta-hat                 1.335             0.193           0.000
model 2   alpha-hat               31.510             4.500           0.000     0.467
          beta-hat              -331.137            66.813           0.000
model 3   alpha-hat              -37.469             7.804           0.000     0.593
          beta-hat                11.151             1.745           0.000

the estimated parameters next to the regressors are statistically significant. all the parameters are suitable for testing the hypothesis of heteroskedasticity and, based on the coefficient of determination, the first model is preferred because its $r^2$ is the largest. this test also shows the presence of heteroskedasticity.
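the white and breusch-pagan checks reported in the next two subsections can be approximated with statsmodels; a minimal sketch (illustrative, not from the original paper), assuming arrays x and y hold the simulated series of table 1; the exact statistics will differ from the paper's hand computations.

```python
# minimal sketch (illustrative) of the white and breusch-pagan tests applied to
# the simulated data; statsmodels' het_breuschpagan is the lm variant of the test.
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white, het_breuschpagan

X = sm.add_constant(x)                          # x, y assumed from the earlier sketch
resid = sm.OLS(y, X).fit().resid

w_lm, w_p, _, _ = het_white(resid, X)           # n * r2 from e^2 on x, x^2
bp_lm, bp_p, _, _ = het_breuschpagan(resid, X)  # lm form of the breusch-pagan test
print(f"white: n*r2 = {w_lm:.2f}, p = {w_p:.4f}")
print(f"breusch-pagan: lm = {bp_lm:.2f}, p = {bp_p:.4f}")
```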
4.4. white test

the auxiliary linear regression $e_i^2 = \alpha_0 + \alpha_1 x_i + \alpha_2 x_i^2 + error$ was estimated, and the values are shown in table 4:

table 4. coefficients

model           estimated value   standard error   p value
alpha0-hat       766.923          484.031          0.125
alpha1-hat      -117.142           54.964          0.042
alpha2-hat         4.053            1.360          0.006

the coefficient of determination is $r_w^2 = 0.607$. it can be observed that the coefficients on $x_i$ and $x_i^2$ are statistically significant, while the constant is not. white's test statistic is $n r_w^2 = 30 \times 0.607 = 18.21$, which is greater than the tabulated value of the $\chi^2$ distribution with two degrees of freedom, 5.991. the conclusion is the same as before: heteroskedasticity is present.

4.5. breusch-pagan test

based on the linear regression equation $\hat{y}_i = 1.022 + 3.090 x_i$, the estimated value of the error variance is obtained:

$\hat{\sigma}^2 = \frac{9462.10}{30} = 315.403$

the new regression equation is:

$\hat{p}_i = -1.852 + 0.143 x_i$, with standard errors (0.609) and (0.028),

where $p_i = \frac{e_i^2}{\hat{\sigma}^2} = \frac{e_i^2}{315.403}$ (the variable denoted $g_i$ in subsection 3.2.5). the test statistic is:

$\frac{\sum \hat{g}_i^2}{2} = \frac{40.660}{2} = 20.33$

the critical value of the $\chi^2$ distribution with one degree of freedom at the 0.05 significance level is 3.841, so it is again concluded that heteroskedasticity is present.

4.6. discussion

after testing, it is clear that all four tests show the presence of heteroskedasticity in the given model. the goldfeld-quandt test statistic (f = 38.26) is higher than the corresponding critical value (2.58) at the given significance level (0.05), so we conclude that heteroskedasticity is present in the model. in the glejser test, the parameter $\alpha_1$ is tested and the coefficients of determination obtained for the different values of the parameter $h$ are compared; in this model (table 5) all parameters are suitable for testing the hypothesis of heteroskedasticity and, based on the coefficient of determination, the first regression is preferred because its $r^2$ is the largest. this test also shows the presence of heteroskedasticity. the white test shows that the calculated value of the test statistic (18.21) is greater than the tabulated value, so we can conclude that heteroskedasticity is present. in the breusch-pagan test, the value of the calculated ratio is 20.33, which is greater than the critical value of the $\chi^2$ distribution with one degree of freedom (3.841), so we also conclude that heteroskedasticity is present.

5. conclusion

one of the classical assumptions of linear regression is homoskedasticity; when it is violated, heteroskedasticity occurs. graphical methods and heteroskedasticity tests are used to detect heteroskedasticity, although it is not possible to say with certainty which test is the best. in this paper, we explained and applied the graphical method and four tests (the goldfeld-quandt, glejser, white and breusch-pagan tests). the review of the literature shows that many authors have addressed this issue and used various tests to detect heteroskedasticity. the tests were applied to simulated data.
it can be seen that the graphical method and all four applied tests confirm the presence of heteroskedasticity, so we can conclude that all four tests gave good results and that the assumption of the existence of heteroskedasticity in the model can be confirmed. future researchers are left with the question of dealing with heteroskedasticity, i.e. the question of removing heteroskedasticity from the model. when eliminating heteroskedasticity, care must be taken as to which method can be used, depending on the form of $\sigma_i^2$.

author contributions: each author has participated and contributed sufficiently to take public responsibility for appropriate portions of the content.

funding: this research received no external funding.

conflicts of interest: the authors declare no conflicts of interest.

references

arbia, g. (2006). spatial econometrics: statistical foundations and applications to regional convergence. springer science & business media.
aue, a., horváth, l., & pellatt, d. f. (2017). functional generalized autoregressive conditional heteroskedasticity. journal of time series analysis, 38(1), 3-21.
barreto, h., & howland, f. (2006). introductory econometrics: using monte carlo simulation with microsoft excel. cambridge university press.
baum, c., & schaffer, m. (2019). ivreg2h: stata module to perform instrumental variables estimation using heteroskedasticity-based instruments.
brüggemann, r., jentsch, c., & trenkler, c. (2016). inference in vars with conditional heteroskedasticity of unknown form. journal of econometrics, 191(1), 69-85.
cattaneo, m. d., jansson, m., & newey, w. k. (2018). inference in linear regression models with many covariates and heteroskedasticity, supplemental appendix.
charpentier, a., ka, n., mussard, s., & ndiaye, o. h. (2019). gini regressions and heteroskedasticity. econometrics, 7(1), 4.
crudu, f., mellace, g., & sándor, z. (2017). inference in instrumental variables models with heteroskedasticity and many instruments. manuscript, university of siena.
ferman, b., & pinto, c. (2019). inference in differences-in-differences with few treated groups and heteroskedasticity. review of economics and statistics, 101(3), 452-467.
halunga, a. g., orme, c. d., & yamagata, t. (2017). a heteroskedasticity robust breusch-pagan test for contemporaneous correlation in dynamic panel data models. journal of econometrics, 198(2), 209-230.
harris, d., & kew, h. (2017). adaptive long memory testing under heteroskedasticity. econometric theory, 33(3), 755-778.
im, k. s. (2000). robustifying glejser test of heteroskedasticity. journal of econometrics, 97(1), 179-188.
jovičić, m. (2011). ekonometrijski metodi i modeli. centar za izdavačku delatnost, ekonomski fakultet, beograd.
kalina, j., & peštová, b. (2017). exact inference in robust econometrics under heteroscedasticity. 11th international days of statistics and economics msed 2017 [proceedings], slaný: melandrium, 636-645.
linton, o., & xiao, z. (2019). efficient estimation of nonparametric regression in the presence of dynamic heteroskedasticity. journal of econometrics, 213(2), 608-631.
lütkepohl, h., & netšunajev, a. (2017). structural vector autoregressions with heteroskedasticity: a review of different volatility models. econometrics and statistics, 1, 2-18.
lütkepohl, h., & velinov, a. (2016). structural vector autoregressions: checking identifying long-run restrictions via heteroskedasticity. journal of economic surveys, 30(2), 377-392.
mladenović, z. (2011). uvod u ekonometriju. centar za izdavačku delatnost, ekonomski fakultet, beograd.
mladenović, z., & nojković, a. (2017). zbirka rešenih zadataka iz ekonometrije. centar za izdavačku delatnost, ekonomski fakultet, beograd.
mladenović, z., & petrović, p. (2017). uvod u ekonometriju. centar za izdavačku delatnost, ekonomski fakultet, beograd.
moussa, r. k. (2019). heteroskedasticity in one-way error component probit models. econometrics, 7(3), 35.
ou, z., tempelman, r. j., steibel, j. p., ernst, c. w., bates, r. o., & bello, n. m. (2016). genomic prediction accounting for residual heteroskedasticity. g3: genes, genomes, genetics, 6(1), 1-13.
sato, t., & matsuda, y. (2017). spatial autoregressive conditional heteroskedasticity models. journal of the japan statistical society, 47(2), 221-236.
taşpınar, s., doğan, o., & bera, a. k. (2019). heteroskedasticity-consistent covariance matrix estimators for spatial autoregressive models. spatial economic analysis, 14(2), 241-268.
white, h. (1980). a heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. econometrica: journal of the econometric society, 817-838.

© 2018 by the authors. submitted for possible open access publication under the terms and conditions of the creative commons attribution (cc by) license (http://creativecommons.org/licenses/by/4.0/).

decision making: applications in management and engineering, vol. 4, issue 2, 2021, pp. 163-177. issn: 2560-6018, eissn: 2620-0104, doi: https://doi.org/10.31181/dmame210402163s

forecasting sugarcane yield of india based on rough set combination approach

haresh kumar sharma 1*, kriti kumari 2 and samarjit kar 3
1 department of mathematics, shree guru gobind singh tricentenary university, gurugram, india
2 department of mathematics, banasthali vidyapith, jaipur, rajasthan, india
3 department of mathematics, national institute of technology durgapur, west bengal, india
* corresponding author. e-mail addresses: haresh_fosc@sgtuniversity.org (h. sharma), kriti.kri89@gmail.com (k. kumari), kar_s_k@yahoo.com (s. kar)

received: 26 january 2021; accepted: 8 june 2021; available online: 17 june 2021.

original scientific paper

abstract: this study applies a combination approach based on the rough set method for forecasting sugarcane production in india. the rough set is a mathematical approach that can deal with qualitative time series data. in the rough set method of combining forecast values, the single forecasts obtained from the individual models and the original time series are taken as the condition and decision attributes, respectively. the decision table built from the actual time series and the single forecasting results is then used to calculate the weights for the combination of forecasts. moreover, dependency, importance and weights are also calculated for the time series through the condition and decision attributes. the paper uses the autoregressive integrated moving average (arima), double exponential smoothing (des) and grey model (gm) to generate the single forecasts. to validate the proposed analysis, sugarcane production data from 1950 to 2011 were used for the overall empirical analysis, and out-of-sample forecasts were generated from 2012 to 2021 for the comparative analysis. the arima (2, 1, 1) model was found most appropriate for forecasting sugarcane production.
key words: sugarcane, forecast, time series models, rough set combination.

1. introduction

india produces the largest amount of sugarcane after brazil and thus ranks second among the top sugarcane-producing nations, according to the foreign agricultural service (fas), 2020. uttar pradesh has the largest contribution, amounting to 38.61% of the overall sugarcane production in the fiscal year 2020-21 (icar - sugarcane report and molasses production, 2019). maharashtra and karnataka follow as the second and third largest sugarcane-producing states; other contributors are bihar, tamil nadu, haryana, gujarat and andhra pradesh. since sugarcane is such a valuable commodity, it is important for the indian economy to have reliable and accurate forecasts of its production. there is a strong relationship between the productivity and the price of the crop. with an unanticipated fall in production, the market stock of the crop declines, reducing the income of the farmer, which is followed by a price rise as a consequence. conversely, if the market is flooded with the crop, prices fall suddenly, again affecting the income of the farmer. therefore, variations in the price of the commodity play a significant role in the formulation of key economic indicators and policies such as the inflation rate, gdp, wages and salaries. apart from this, it also affects the production level of other industries that further process sugarcane and its by-products, thereby affecting their profit margins.

over the last few years, a large body of literature has applied single time series models in the area of time series forecasting. arima models are very popular for forecasting sugarcane production (bajpai and venugopalan, 1996; kumar and madhu, 2014). for example, venugopalan and srinath (1998) and tsitsika et al. (2007) applied arima models for modelling and forecasting fish catches, and suresh and krishna (2011) applied arima models to forecast sugarcane yield in tamil nadu. hanson et al. (1999) compare the forecasting efficiency of neural network models with arima models. arima models were used to forecast the production and productivity of a variety of crops of tamil nadu (balanagammal et al., 2000). boken (2000) used arima models for the forecasting of wheat production in pakistan and canada. balasubramanian and dhanavanthan (2002) applied arima models to forecast seasonal paddy in tamil nadu and food grains in india. ravichandran and prajneshu (2001) and prajneshu et al. (2002) compared the accuracy of structural time series models with arima models. macciotta et al. (2002) use arma models to forecast milk, fat and protein yields of italian simmental cows. state-level agricultural production forecasting was also done by applying arima models (indira and datta, 2003). chandran and prajneshu (2005) compare the forecasting performance of arima models with a nonparametric regression approach for the forecasting of oilseed production in india. yields of irrigated crops such as potato, mustard and wheat were forecasted by employing arima models (sahu, 2006). milk production in india was also predicted using different time series methods (pal et al., 2007). in addition, there are different time series models, such as econometric and smoothing models, and different combination approaches.
in recent years, the rough set (rs) approach has been widely used in combined forecasting. for example, bao et al. (2006) employed a combination approach based on rough set theory to determine the weighting coefficients for predicting electric power load from 1994 to 2000 in zhejiang. xiao et al. (2009) examined the forecasting of international trade in the chongqing municipality of china using a combined approach based on the rough set, which they compare with individual models. ahmed et al. (2009) applied a combination of forecasts based on the rough set. suo et al. (2013) evaluated the weight coefficients by using rough set theory to combine the forecasts of the quadratic curve, grey, and cubic exponential smoothing models for forecasting total agricultural machinery power for the period 1996 to 2008; they show that the accuracy of the rough set combination approach is higher than that of the individual forecasting methods. zhou and zhang (2013) employed a rough set combination method using a support vector machine and a neural network to predict chinese co2 emissions from 1990 to 2011. sharma et al. (2019) proposed a hybrid rough set based forecasting model and applied it to an air transportation passenger data set for australian tourism demand. tang et al. (2021) applied hybrid fuzzy rough set models to the imputation of missing traffic data. patra and barman (2021) employed a rough set based dependency measure to reduce the dimensionality of hyperspectral images. ala'raj et al. (2021) proposed an seird dynamic model for forecasting covid-19 data and applied an arima correction model for validation of the data set. jahangir et al. (2020) employed a rough set based artificial neural network model for multimodal short-term wind speed prediction. li and wang (2019) proposed hybridized nmgm-arima and nmgm-bp models to forecast india's dependence on foreign oil. sharma et al. (2018) applied rough set based forecasting methods to airline data. wang et al. (2018) used a hybrid arima and metabolic nonlinear grey model to forecast u.s. shale gas monthly production. sharma et al. (2020) also applied rough set theory for the ranking of forecasting models. rough set theory has been successfully applied to various real-life decision-making problems (karavidić and projović, 2018; roy et al., 2018; roy et al., 2019; sharma et al., 2018a; stanković et al., 2019). other soft computing approaches are used to tackle imprecision and vagueness in data and have been successfully applied to various real-life problems (karavidić and projović, 2018; žižović and pamucar, 2019; vasiljević et al., 2018; mukhametzyanov and pamucar, 2018). moreover, elgabbanni et al. (2014) applied a rough set combination (rsc) model with an appropriate weight coefficient to forecast traffic accident time series data for washington dc in the us from 1982 to 2008; they reveal that the combination method outperforms the individual methods. the main concern in the combination of forecasts is how to determine appropriate weight coefficients for combining the forecasts of the various single time series models. there have been various ways of determining the weight coefficients in the combination approach, such as the simple average, the inverse of mape, variance-based weights, the inverse of the mean square error, etc. however, to the best of our knowledge, rough set theory has not yet been studied in the sugarcane production literature.
hence, the main objective of our study is to forecast sugarcane production in india using a novel rough set combination (rsc) approach. the study aims to apply an appropriate way of combining the different single models in order to improve the forecasting accuracy of the single time series models. the arima, des and gm models are combined by applying rough set theory to forecast sugarcane production in india for the period 1950 to 2011. we also carry out a comparative analysis of the single time series and rough set combination methods using the mean absolute percentage error (mape) criterion.

the remainder of the study is organized as follows. the methodology section discusses rough set theory. the next section illustrates the procedure of the rough set combination method for the study of sugarcane production. the data section describes the data. the empirical results section presents the results of the empirical study, which include the time series models and their combination. the performance comparison section presents the different performance criteria used in the forecasting comparison, and the last section gives the conclusions.

2. research methodology

rough set is a very useful classification technique for categorical variables like low, average and high. in this method, the time series data are arranged in an information table (decision table) with their objects (data points), using dependent and independent time series variables. the time series variables are then transformed into condition and decision variables (attributes). table 1 shows a hypothetical example of a decision table based on the actual time series and the single forecasts. the attributes are categorized into different grades like low, average or high, true or false, etc. the rough set machinery is then applied to generate the weights by establishing the relationships between the single forecasts and the actual time series. in the rough set method of combining forecast values, the single forecasts obtained from the individual models and the original time series are taken as the condition and decision attributes, respectively. finally, the decision table based on the actual time series and the single forecasting results is used to calculate the weights for the combination of forecasts.

table 1. hypothetical example of a decision table

time   condition attributes                                         decision attribute
       x1-hat(t)  x2-hat(t)  x3-hat(t)  ...  xn-hat(t)              y(t)
t1     x1-hat(1)  x2-hat(1)  x3-hat(1)  ...  xn-hat(1)              y(1)
t2     x1-hat(2)  x2-hat(2)  x3-hat(2)  ...  xn-hat(2)              y(2)
t3     x1-hat(3)  x2-hat(3)  x3-hat(3)  ...  xn-hat(3)              y(3)
...    ...        ...        ...        ...  ...                    ...
tm     x1-hat(m)  x2-hat(m)  x3-hat(m)  ...  xn-hat(m)              y(m)

in table 1, $\hat{x}_t^{(1)}, \hat{x}_t^{(2)}, \hat{x}_t^{(3)}, \dots, \hat{x}_t^{(n)}$ are the single forecasts, called the independent variables, and $y(t)$ is the actual time series, called the dependent variable; $\hat{x}_m^{(1)}, \hat{x}_m^{(2)}, \hat{x}_m^{(3)}, \dots, \hat{x}_m^{(n)}$ and $y(m)$ are the objects (data points) of the time series variables. these variables are further transformed into condition and decision attributes using normalization. normalization is used to convert quantitative data into qualitative data. the normalization ($n_t$) technique is defined as:

$n_t = \frac{z_{kt} - z_{min}}{z_{max} - z_{min}}$

where $z_{kt}$ is the set of actual and single-forecast time series values, and $z_{min}$ and $z_{max}$ are the minimum and maximum values of $z_{kt}$, $k = 1, 2, \dots$. the actual and single forecasts are then transformed into qualitative normalized values (nv): low (l) if $0 \le nv \le 0.4$, average (a) if $0.4 < nv \le 0.8$, and high (h) if $nv > 0.8$.
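a minimal sketch (illustrative, not from the original paper) of the min-max normalization and the low/average/high discretization described above, applied to a toy series:

```python
# minimal sketch of the normalization and discretization step of section 2.
import numpy as np

def normalize(z):
    """min-max normalization n_t = (z - z_min) / (z_max - z_min)."""
    z = np.asarray(z, dtype=float)
    return (z - z.min()) / (z.max() - z.min())

def discretize(nv):
    """map normalized values to the grades low / average / high."""
    return np.where(nv <= 0.4, "l", np.where(nv <= 0.8, "a", "h"))

actual = [51, 44.41, 58.74, 60.54, 69.05, 110, 281.58, 355.52]   # sample values
nv = normalize(actual)
print(list(zip(actual, np.round(nv, 2), discretize(nv))))
```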
3. rough set theory (rst)

rough set theory (rst) is a mathematical technique for handling imprecision, vagueness and uncertainty (pawlak, 1982). it is an excellent mathematical tool for the evaluation of a vague description of objects. the adjective vague refers to the quality of information, that is, to the uncertainty or ambiguity that follows from information granulation. the main idea of rough set theory is the approximation of a set by a pair of crisp sets, called the lower and upper approximations of the set.

let $u$ be a non-empty finite set of objects referred to as the universe and $a$ a non-empty finite set of attributes; then $s = (u, a, c, d)$ is called an information system, where $c$ and $d$ are the condition and decision attributes, respectively. for $s = (u, a, c, d)$ and $p \subseteq a$, a set $r \subseteq u$ can be approximated on the basis of the knowledge contained in $p$ by constructing the p-lower and p-upper approximations of $r$, denoted by $\underline{p}(r)$ and $\overline{p}(r)$, respectively, where:

$\underline{p}(r) = \{x \mid [x]_p \subseteq r\}$ (1)

$\overline{p}(r) = \{x \mid [x]_p \cap r \neq \emptyset\}$ (2)

the objects in $\underline{p}(r)$ are the members of $u$ that can be certainly classified as objects of $r$ on the basis of the knowledge in $p$, whereas the objects in $\overline{p}(r)$ are the elements of $u$ that can possibly be classified as objects of $r$ on the basis of the knowledge in $p$. the boundary region of $r$, expressed as $bn_p(r) = \overline{p}(r) - \underline{p}(r)$, is the set of members that cannot be decisively classified into $r$ on the basis of the knowledge in $p$. if the lower and upper approximation sets are equal, the boundary region of the set is empty. in the opposite case, if the boundary region contains some members (objects), the set $r$ is referred to as a rough set with respect to $p$. rst provides an accuracy measure of the quality of classification (pawlak, 1982; pawlak and skowron, 2007). the quality of classification is the ratio of all correctly classified objects of the data set and is calculated as:

$\gamma_c(d) = \frac{|pos_c(d)|}{|u|}$

where $|u|$ is the cardinality of the universal set (objects) and $|pos_c(d)|$ is the cardinality of the union of all lower approximations of $d$ on $c$.

4. autoregressive integrated moving average (arima)

box and jenkins (1976) introduced the arima model for modelling a time series with trend and seasonal components. it is a combination of the autoregressive (ar) and moving average (ma) models. the arima model for a time series $x_t$ $(t = 1, 2, \dots, t)$ is given by:

$\varphi_p(b)\, \Delta^d x_t = \theta_q(b)\, a_t$

where $\varphi_p(b) = 1 - \varphi_1 b - \dots - \varphi_p b^p$ and $\theta_q(b) = 1 + \theta_1 b + \dots + \theta_q b^q$ are the ar and ma polynomials, respectively, $b$ is the backshift operator, $\Delta^d \Delta_s^D x_t = (1-b)^d (1-b^s)^D x_t$, and $|\varphi_p| < 1$, $|\theta_q| < 1$.

5. grey model

the grey model was developed by deng (1982). in this model, the future trend is estimated using a linear differential equation of order one. the parameters involved in the model can be estimated using the ordinary least squares (ols) method (wang, 2004; xu et al., 2016). the grey model as a first-order linear differential equation is written as:

$\frac{dx_t}{dt} + a x_t = b$

where $x_t$ is the time series and $a$ and $b$ are the parameters.
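a minimal sketch (illustrative, not from the original paper, which used r-3.0.3) of generating the three single forecasts used later in the combination: arima(2,1,1), double exponential smoothing (holt) and a hand-rolled gm(1,1) grey model; the series below is only a stand-in for the annual sugarcane production data.

```python
# minimal sketch of the three single forecasting models; exact fits will differ
# from the paper's results.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import Holt

def gm11_forecast(x, horizon):
    """gm(1,1): fit dx/dt + a*x = b on the cumulated series and forecast ahead."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                                # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])                     # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]  # least squares estimate of a, b
    k = np.arange(1, len(x) + horizon)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a
    x_hat = np.diff(np.concatenate([[x[0]], x1_hat]))
    return x_hat[-horizon:]

rng = np.random.default_rng(0)
series = pd.Series(np.linspace(50, 340, 62) + rng.normal(0, 8, 62),
                   index=range(1950, 2012))          # stand-in for sugarcane production

horizon = 10
arima_fc = ARIMA(series, order=(2, 1, 1)).fit().forecast(horizon)
des_fc = Holt(series).fit().forecast(horizon)
gm_fc = gm11_forecast(series.values, horizon)
print(arima_fc.values, des_fc.values, gm_fc, sep="\n")
```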
6. combination forecast based on rough set

because a combination method yields better results than a single method, a modelling and forecasting approach with high accuracy is adopted in this study. there are three main steps in the combined approach: generation of the single forecasts, computation of the weight coefficients, and combination of the forecasts. let $x_t$ $(t = 1, 2, \dots, n)$ be the actual time series, let $x_{1t}, x_{2t}, x_{3t}, \dots, x_{jt}$ $(j = 1, 2, \dots, m)$ be the forecast values of the $m$ single models at time $t$, and let $w_j$ $(j = 1, 2, \dots, m)$ be the weight coefficient of the single forecast $x_{jt}$; then the combined forecast can be written as:

$\hat{z}_c(t) = \frac{\sum_{i=1}^{m} w_i x_{it}}{\sum_{i=1}^{m} w_i}$

where $\hat{z}_c(t)$ is the rough set combination forecast value. the overall procedure is described in figure 1.

figure 1. the framework of the proposed work

6.1. weight determination based on rough set

let $s = (u, a, c, d)$ denote the rough set decision table, where $u$ represents the universal set of time points of the time series, $c = \{x_{1t}, x_{2t}, x_{3t}, \dots, x_{jt}\}$ $(j = 1, 2, \dots, m)$ is the set of single forecasts, i.e. the condition attributes, and $d$ is the decision attribute, $x_t$, used to determine the weight coefficients. the overall procedure for deriving the weight coefficients consists of the following steps:

step 1: input the actual time series $x_t$ $(t = 1, 2, \dots, n)$ and forecast it with the respective single models, $x_{1t}, x_{2t}, x_{3t}, \dots, x_{jt}$ $(j = 1, 2, \dots, m)$.
step 2: construct the rough set decision table $s$ by arranging the condition and decision attributes.
step 3: compute the dependence (ahmed et al., 2009) of the decision attribute $d$ on the condition attributes $c$.
step 4: evaluate the dependence of each attribute with respect to $d$ using the expression $\gamma_{c-\{c'\}}(d) = \frac{|pos_{c-\{c'\}}(d)|}{|u|}$, where $c'$ is a subset of $c$.
step 5: calculate the importance and weight coefficients of each single forecast and then combine the single forecasts.

6.2. data

our empirical analysis uses yearly sugarcane production time series data for india from 1950 to 2011. the data are obtained from the sugar and molasses production statistics (2019). the r-3.0.3 software is used for the overall empirical analysis of the arima, des and grey models. the weight coefficients for the rough set combination method are calculated with the rough set data explorer (rose2) software (predki et al., 1998). the whole time series is divided into two parts: (1) the in-sample period 1950-2011, consisting of 61 observations, used for the modelling process of the several methods; (2) the data for 2012-2021, used to generate the out-of-sample forecasts for the different models.

6.3. comparison criterion

the performance of all models has been evaluated using the mean absolute percentage error (mape) criterion for measuring prediction accuracy:

$mape = \frac{1}{n} \sum_{t=1}^{n} \frac{|ac_t - \hat{pr}_t|}{ac_t}$ (3)

where $ac_t$ $(t = 1, 2, \dots, n)$ is the actual value, $\hat{pr}_t$ represents the predicted value and $n$ is the total number of observations. lewis (1982) demonstrates that a mape value of less than 10% indicates highly accurate forecasting; when it lies between 10-20% the forecasting is good, between 20-50% it is reasonable, and more than 50% denotes inaccurate forecasting.
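a minimal sketch (illustrative, not from the original paper) of the mape criterion of eq. (3) and of the weighted combination formula above; the weights used in the example are those reported later in eq. (4), and all variable names here are assumptions for the example.

```python
# minimal sketch of the mape criterion and the weighted combination of forecasts.
import numpy as np

def mape(actual, predicted):
    """mean absolute percentage error, as a fraction (multiply by 100 for %)."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(np.abs(actual - predicted) / actual)

def combine(forecasts, weights):
    """weighted combination z_c(t) = sum(w_i * x_it) / sum(w_i)."""
    forecasts, weights = np.asarray(forecasts, float), np.asarray(weights, float)
    return forecasts @ weights / weights.sum()

# example with the rough-set weights of eq. (4): arima, des, gm
weights = [0.2683, 0.4756, 0.2561]
single = np.array([[319.55, 306.05, 382.06],    # forecasts for one year (arima, des, gm)
                   [305.73, 310.63, 392.15]])   # and for the next year
print(combine(single, weights))
```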
7. results

according to the forecasting results of each single model (see table 2), we apply the discretization method to discretize the data into three grades (0, 1, 2). the transformed discrete values, expressed as attribute values, are exhibited in table 3. consequently, the decision table (table 3) is built up for the evaluation of the weights $w_j$ $(j = 1, 2, \dots, m)$ using the rough set in the combination forecast. further, the dependence and importance can be calculated using equations (1) and (2) and the quality-of-classification measure. in the next step, the weights are computed by normalizing the importance of each single model. finally, the combined forecasting model can be established as:

$\hat{x}_t = 0.2683\, arima + 0.4756\, des + 0.2561\, grey$ (4)

the evaluated results for dependence, importance and weights are given in table 4. moreover, we used the actual and forecast values from 1950 to 2011 for the evaluation of the weight coefficients by the rough set. table 2 gives the sugarcane production and the forecasts obtained from the arima, des and grey models. the arima, des and grey forecasts are taken as the three condition attributes and the actual values as the decision attribute in order to apply rough set theory, and these attributes have been normalized (yuan and xu, 2013) as described in section 2, where $x_{1t}$ represents the forecasts of the arima model, $x_{2t}$ the forecasts of the des model and $x_{3t}$ the forecasts of the grey model. also, 2012-2018 out-of-sample forecasts were generated for the different models. further, the simple average and inverse-of-mape combination methods (bates and granger, 1969; menezes et al., 2000; armstrong, 2001; aiolfi and timmermann, 2006; andrawis et al., 2011) are also employed for the prediction of sugarcane production in india. the forecasting results of the arima, des and grey models are combined using the weight coefficients $w_j$ $(j = 1, 2, \dots, m)$ obtained from the simple average and the inverse-of-mape combination methods. in the inverse-of-mape method, the weights are computed from the inverse of the mape obtained for each single model; in the simple average method, each single forecast has an equal weight. the combined forecasting models using the simple average and inverse-of-mape methods are:

$\hat{x}_t = 0.5\, arima + 0.5\, des + 0.5\, grey$ (5)

$\hat{x}_t = 0.2309\, arima + 0.2477\, des + 0.1663\, grey$ (6)

(the weights in eqs. (5) and (6) are normalized by their sum in the combination formula of section 6.)

table 2. actual and forecast values of the models

year   arima       des      gm             actual
1952   59.69924    66.21    80.05650072    51
1953   44.34212    55.58    82.16917439    44.41
1954   49.33245    48.99    84.33760107    58.74
1955   68.02704    63.32    86.56325205    60.54
1956   51.24382    65.12    88.84763749    69.05
1957   71.69061    73.63    91.19230735    71.16
1958   66.22566    75.74    93.59885255    73.36
1959   73.1471     77.94    96.06890595    77.82
1960   78.06447    82.4     98.60414353    110
...    ...         ...      ...            ...
2002   302.09588   303.01   294.450717     281.58
2003   273.44244   286.16   302.2212075    233.87
2004   227.62409   238.45   310.1967595    237.09
2005   269.76906   241.67   318.3827847    281.18
2006   293.25304   285.76   326.7848373    355.52
2007   353.68756   360.1    335.4086182    348.19
2008   297.20754   352.77   344.2599788    285.03
2009   269.79121   289.61   353.3449249    292.31

table 3. decision table for rough set

u     arima   des   gm   actual values
1     0       0     0    0
2     0       0     0    0
3     0       0     0    0
4     0       0     0    0
5     0       0     0    0
6     0       0     0    0
7     0       0     0    0
8     0       0     0    0
9     0       0     0    0
...   ...     ...   ...  ...
51    2       2     2    1
52    2       2     2    2
53    1       2     2    2
54    2       2     2    2
55    2       2     2    2
56    2       2     2    2
57    2       2     2    2
58    2       2     2    2
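as an illustration (not the rose2 software used in the paper), dependency, importance and weights can be derived from a discretized decision table like table 3 as follows: the dependency $\gamma_c(d)$ is the share of objects whose condition-attribute pattern maps to a single decision value, the importance of an attribute is the drop in dependency when that attribute is removed, and the weights are the normalized importances. a minimal sketch on a toy table:

```python
# minimal, simplified reimplementation of the rough-set weight derivation;
# results on the real data may differ from those produced by rose2.
from collections import defaultdict

def dependency(rows, cond_idx, dec_idx):
    groups = defaultdict(set)
    for r in rows:                                    # group objects by condition pattern
        groups[tuple(r[i] for i in cond_idx)].add(r[dec_idx])
    consistent = sum(1 for r in rows
                     if len(groups[tuple(r[i] for i in cond_idx)]) == 1)
    return consistent / len(rows)

# toy decision table: columns are (arima, des, gm, actual), values are grades 0/1/2
table = [(0, 0, 0, 0), (0, 0, 0, 0), (1, 1, 1, 1), (1, 1, 2, 1),
         (2, 2, 2, 2), (2, 2, 2, 1), (1, 2, 2, 2), (2, 1, 2, 2)]
cond, dec = [0, 1, 2], 3

gamma_full = dependency(table, cond, dec)
importance = {a: gamma_full - dependency(table, [c for c in cond if c != a], dec)
              for a in cond}
total = sum(importance.values()) or 1.0
weights = {a: imp / total for a, imp in importance.items()}
print("dependency:", gamma_full, "importance:", importance, "weights:", weights)
```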
table 4. estimated parameters for the models

models   dependency   importance   weight
arima    0.6207       0.3793       0.268284057
des      0.3276       0.6724       0.475597680
gm       0.6379       0.3621       0.256118263

7.1. discussion

7.1.1. performance comparison of different models

the forecasting performance of the different models has been evaluated using the mape criterion. table 5 reports the mape results for each of the individual models and for the rough set combination (rsc) method for the out-of-sample period 2012 to 2018. regarding the mape values, the forecasting accuracy of the rsc method ($w_1 = 0.2683$, $w_2 = 0.4756$, $w_3 = 0.2561$) is better than that of the des and grey individual forecasting methods, and the inverse-of-mape combination outperforms the des, grey and rough set combination methods for the out-of-sample forecasts; according to the mape classification of lewis (1982), the arima model is the most accurate of all the single and combination methods. for better understanding, figure 2 compares the actual and forecast values of each model based on the out-of-sample forecasts. the forecasting curves of all the models fit the actual curve well, but arima gives the best fit for the prediction of sugarcane production. all this confirms that the forecasting results of the imape and arima models are more accurate than those of the other single and combination time series models. since arima forecasts sugarcane production most accurately according to the out-of-sample forecasts, arima is used to predict the sugarcane production for the next three years, from 2019 to 2021, together with the forecasting results of the des, grey, simple average, imape and rough set combination models. the results of forecasting using the hybrid (rsc) model are presented in table 6, and the corresponding mape results are shown in table 5.

table 5. mape of the forecasting models

models   in-sample   out-of-sample
arima    7.8         0.85
des      9.2         12.1
gm       13.8        7.85
sa       8.1         1.35
imape    7.6         2.59
rsc      8.2         3.71

figure 2. comparative analysis of the different models

table 6. forecasts for the next ten years

year   arima      des      gm            sa           imape         rsc
2012   319.5522   306.05   382.0637366   335.888646   2485.149050   329.140
2013   305.7307   310.63   392.1463156   336.169005   2497.903017   330.193
2014   318.7837   315.21   402.4949718   345.496224   2564.058917   338.524
2015   331.7398   319.79   413.1167267   354.882176   2630.777165   346.898
2016   327.1773   324.37   424.0187875   358.522029   2663.934317   350.645
2017   317.3611   328.95   435.2085512   360.506550   2687.587675   353.055
2018   317.3513   333.53   446.6936104   365.858303   2731.273146   358.172
2019   323.7363   338.11   458.4817579   373.442686   2788.326094   365.083
2020   325.6805   342.69   470.5809919   379.650497   2837.532790   370.881
2021   322.1153   347.27   482.9995222   384.128274   2876.821590   375.284

8. conclusions

since no single forecasting model always performs well, this paper applied a novel combination of forecasts based on the rough set (rs) approach for the prediction of sugarcane production in india for the period 1950 to 2011, in order to improve the performance of the single models. the forecasting results of the autoregressive integrated moving average (arima), des and grey models are combined using the weight coefficients obtained from the simple average, inverse-of-mape (imape) and rough set combination methods. moreover, the performance of the several forecasts has been evaluated under the mean absolute percentage error (mape) criterion.
our empirical study suggests the following outcomes. first, all of the single forecasting models appear to provide accurate and reliable forecasting results, with mape values below 10%. secondly, the arima and imape models have better accuracy than the other models according to the mape values, and arima is the most accurate among all the approaches. in addition, the combination methods are found to be effective for the forecasting of sugarcane production in india. the contribution of the article is that the rough set combination of forecasts is applied for the first time in an agricultural empirical study. the obtained results suggest that arima and the combination methods are effective ways of forecasting sugarcane production. it is important to describe the importance of each single model and the dependency of sugarcane production for better forecasting performance. it is expected that future studies would benefit from concentrating on other single methods for agricultural forecasting.

author contributions: haresh kumar sharma contributed to the research design, detailed data analysis through the selected methodology, structuring, writing, and editing of the manuscript. kriti kumari participated in the data collection and preliminary analysis. samarjit kar supervised the research and the editing of the manuscript.

funding: this research received no external funding.

acknowledgements: we wish to express our most profound appreciation to the editors and the anonymous reviewers.

conflicts of interest: the authors report no potential conflict of interest.

references

ahmed, e. f., yang, w. j., & abdullah, m. y. (2009). novel method of the combination of forecasts based on rough sets. journal of computer science, 440-444.
aiolfi, m., & timmermann, a. (2006). persistence in forecasting performance and conditional combination strategies. journal of econometrics, 135, 31-53.
ala'raj, m., ajdalawieh, m., & nizamuddin, n. (2021). modeling and forecasting of covid-19 using a hybrid dynamic model based on seird with arima corrections. infectious disease modelling, 6, 98-111.
andrawis, r. r., atiya, a. f., & shishiny, h. e. (2011). combination of long term and short-term forecasts, with application to tourism demand forecasting. international journal of forecasting, 27, 870-886.
armstrong, j. s. (2001). principles of forecasting: a handbook for researchers and practitioners. kluwer academic publishers, new york (chapter 13).
bajpai, p. k., & venugopalan, r. (1996). forecasting sugarcane production by time series modeling. indian journal of sugarcane technology, 11(1), 61-65.
balanagammal, d., ranganathan, c. r., & sundaresan, k. (2000). forecasting of agricultural scenario in tamilnadu - a time series analysis. journal of indian society of agricultural statistics, 53(3), 273-286.
balasubramanian, p., & dhanavanthan, p. (2002). seasonal modeling and forecasting of crop production. statistics and applications, 4(2), 107-118.
bao, y., huang, m., zheyan, y. h., & li, x. (2006). application of combination forecasting based on rough sets theory on electric power system. proceedings of the 6th congress on intelligent control and automation, june 21-23, dalian, china, 1745-1748.
bates, j. m., & granger, c. (1969). the combination of forecasts. operational research quarterly, 20, 451-468.
boken, v. k. (2000). forecasting spring wheat yield using time series analysis: a case study for the canadian prairies. agricultural journal, 92(6), 1047-1053.
box, g. e. p., & jenkins, g. m. (1976). time series analysis: forecasting and control, revised edition. san francisco: holden day.
chandran, k. p., & prajneshu. (2005). nonparametric regression with jump points methodology for describing country's oilseed yield data. journal of indian society of agricultural statistics, 59(2), 126-130.
deng, j. (1982). control problems of grey systems. systems and control letters, 1(1), 288-294.
elgabbanni, b. o. s., khozium, m. o., & ahmed, m. a. (2014). combination prediction model of traffic accident using rough set technology approach. international journal of enhanced research in science technology engineering, 3(1), 47-56.
hanson, j. v., macdonald, j. b., & nelson, r. d. (1999). time series prediction with genetic algorithm designed neural networks: an empirical comparison with modern statistical models. computational intelligence, 15(3), 171-184.
icar - sugarcane report and molasses production (2019). https://sugarcane.icar.gov.in/index.php/en/sugar-stats/sugarcane-statistics (accessed 13 november 2020).
indira, r., & datta, a. (2003). univariate forecasting of state-level agricultural production. economic and political weekly, 38, 1800-1803.
jahangir, h., masoud aliakbar, g. m., alhameli, f., mazouz, a., ahmadian, a., & elkamel, a. (2020). short-term wind speed forecasting framework based on stacked denoising auto-encoders with rough ann. sustainable energy technologies and assessments, 38.
karavidić, z., & projović, d. (2018). a multi-criteria decision-making (mcdm) model in the security forces operations based on rough sets. decision making: applications in management and engineering, 1(1), 97-120.
kumar, m., & madhu, a. (2014). an application of time series arima forecasting model for predicting sugarcane production in india. studies in business and economics, 9(1), 81-94.
lewis, c. d. (1982). international and business forecasting methods. london: butterworths.
li, s., & wang, q. (2019). india's dependence on foreign oil will exceed 90% around 2025 - the forecasting results based on two hybridized nmgm-arima and nmgm-bp models. journal of cleaner production, 232, 137-153.
maccioitta, n. p. p., vicario, d., pulina, g., & cappio-borlino, a. (2002). test day and lactation yield predictions in italian simmental cows by arma methods. journal of dairy science, 85, 3107-3114.
menezes, l. m. d., bunn, d. w., & taylor, j. w. (2000). review of guidelines for the use of combined forecasts. european journal of operational research, 120, 190-204.
mukhametzyanov, i., & pamucar, d. (2018). a sensitivity analysis in mcdm problems: a statistical approach. decision making: applications in management and engineering, 1(2), 51-80.
patra, s., & barman, b. (2021). a novel dependency definition exploiting boundary samples in rough set theory for hyperspectral band selection. applied soft computing, 99, 106944.
pawlak, z., & skowron, a. (2007). rudiments of rough sets. information sciences: an international journal, 177(1), 3-27.
pawlak, z. (1982). rough sets. international journal of computer and information science, 11, 341-356.
predki, b., wong, s. k. m., stefanowski, j., susmaga, r., & wilk, s. (1998). rose - software implementation of the rough set theory. in l. polkowski & a. skowron (eds.), rough sets and current trends in computing, lecture notes in artificial intelligence. berlin: springer, 605-608.
rough sets and current trends in computing. lecture notes in artificial intelligence. berlin: springer, 605-608. roy, j., adhikary, k., kar, s., & pamucar, d. (2018). a rough strength relational dematel model for analysing the key success factors of hospital service quality. decision making: applications in management and engineering, 1(1), 121-142. roy, j., sharma, h., kar, s., kazimieras, z. e., & saparauskas, j. (2019). an extended copras model for multi-criteria decision-making problems and its application in web-based hotel evaluation and selection. economic research – ekonomska istraživanja, 32 (1), 219-253. sahu, p. k. (2006). forecasting yield behavior of potato, mustard, rice, and wheat under irrigation. journal of vegetable science, 12(1), 81–99. sharma, h. k., kumari, k., & kar, s. (2019). short-term forecasting of air passengers based on hybrid rough set and double exponential smoothing models, intelligent automation and soft computing, 25(1), 1-13. sharma, h. k., kumari, k., & kar, s. (2020). a rough set approach for forecasting models. decision making: applications in management and engineering, 3(1), 1-21. sharma, h. k., kumari, k., kar, s. (2018). air passengers forecasting for australian airline based on hybrid rough set approach. journal of applied mathematics, statistics and informatics, 14(1), 5–18 sharma, h., roy, j., kar, s. & prentkovskis, o. (2018a). multi-criteria evaluation framework for prioritizing indian railway stations using modified rough ahp-mabac method. transport and telecommunication journal, 19(2), 113-127. suo, r., huang, m., & liu. y. (2013). the application of combination forecasting method in total power of agriculture machinery based on rs. advanced materials research, 601, 476 – 483. suresh, k. k., & krishna, s. r. (2011). forecasting sugarcane yield of tamilnadu using arima models. sugar tech, 13(1), 23–26 tang, j., zhang, x., yu, t., & liu, f. (2021). missing traffic data imputation considering approximate intervals: a hybrid structure integrating adaptive network-based inference and fuzzy rough set, physica a: statistical mechanics and its applications, in press. vasiljević, m., fazlollahtabar, h., stević, željko, & vesković, s. (2018). a rough multicriteria approach for evaluation of the supplier criteria in automotive industry. decision making: applications in management and engineering, 1(1), 82-96. wang, c. h. (2004). predicting tourism demand using fuzzy time series and hybrid grey theory. tourism management, 25 (3), 367-374. wang, q., li, s., li, r., & ma, m. (2018). forecasting u.s. shale gas monthly production using a hybrid arima and metabolic nonlinear grey model. energy, 160, 378-387. xiao, z., gong, k., & zoy, y. (2009). a combined forecasting approach based on fuzzy soft sets. journal of computational and applied mathematics, 228, 326-333. forecasting sugarcane yield of india based on rough set combination approach 177 xu, s., wangshu, s., jianzhou, w., yixin, z. & yining, g. (2016). using a grey-markov model optimized by cuckoo search algorithm to forecast the annual foreign tourist arrivals to china. tourism management, 52, 369-379. yuan, l., & xu, f. (2013). research on the multiple combination weight based on rough set and clustering analysis, procedia computer science, 17, 274 – 281. zhou, j., & zhang, x. (2013). combined forecasting model based on the rough set to predict the chinese co2 emissions, advanced materials research, 773, 831– 836. žižović, m., & pamucar, d. (2019). 
new model for determining criteria weights: level based weight assessment (lbwa) model. decision making: applications in management and engineering, 2(2), 126-137. © 2018 by the authors. submitted for possible open access publication under the terms and conditions of the creative commons attribution (cc by) license (http://creativecommons.org/licenses/by/4.0/). plane thermoelastic waves in infinite half-space caused decision making: applications in management and engineering vol. 1, issue 2, 2018, pp. 93-110 issn: 2560-6018 eissn: 2620-0104 doi:_ https://doi.org/10.31181/dmame1802091p * corresponding author e-mail addresses: ivanpetrovic1977@gmail.com (i. petrović), kankaras.milan@outlook.com (m. kankaraš) dematel-ahp multi-criteria decision making model for the determination and evaluation of criteria for selecting an air traffic protection aircraft ivan b. petrović 1 *, milan kankaraš 1 1 university of defence, military academy, pavla jurišića šturma 33,11000 belgrade, serbia received: 29 march, 2018; accepted: 31 august, 2018; available online: 31 august 2018. original scientific paper abstract: this paper describes an approach in the determination and evaluation of the criteria and attributes of criteria for selecting the air traffic protection aircraft. after collected initial criteria and attributes, the interaction between criteria and attributes of criteria for the selection of the aircraft especially for the protection of air traffic was evaluated by 45 respondents. data processing and criteria and attributes determination were carried out by the dematel method (by eliminating less significant criteria and attributes). furthermore, the weight values of each criterion and attribute were determined by the ahp method. prioritization was carried out using an eigenvector method. for determination reliability the consistency ratio was checked for each result. as a result the model for the selection of the aircraft was proposed. key words: aircraft; air traffic; attribute; criterion; consistency; protection. 1. introduction from an economic point of view air traffic can be one of the more profitable business activities of each country. the organization and implementation of air traffic is complex process, which includes the need for continuous improvement (menon, sweriduk & bilimoria, 2004; chen, chen & sun, 2017; menon & park, 2016; steiner, mihetec & božičević, 2010; durso & manning, 2008; abbass, tang, amin, ellejmi & kirby, 2014). but, the issue of improving the protection of air traffic from aircraft threats has become particularly important since 9/11 (petrović, kankaraš & cvetković, 2015). mailto:ivanpetrovic1977@gmail.com mailto:kankaras.milan@outlook.com petrović & kankaraš /decis. mak. appl. manag. eng. 1 (2) (2018) 93-110 94 there are many approaches to address air traffic protection issues. the basic way is in the existence of duty–aircrafts (it is essentially a fighter aircraft that provide a rapid reaction in the case of airspace violation and other situations of violation of air traffic safety). some of small countries (in quantitative and qualitative terms) (gordić & petrović, 2014) give another countries the jurisdiction for the conducting of this mission. in the case study of the republic of serbia, which is a synonym for the small country, it can be noticed what are the criteria and how to prioritize them for the needs of equipping the country with the aircraft whose main purpose is to protect air traffic and intercept the aircraft that violated the airspace. 
the small area of the republic of serbia and the unusual, elongated form of its territory allow for a short flight time over the territory and a simple and rapid airspace violation (petrović, 2013). therefore, it is necessary to determine which criteria and attributes of criteria are significant for the needs of equipping the country with the aircraft. the general objective of this paper is the determination and evaluation of criteria, and of attributes within the determined criteria, for selecting the aircraft for the purpose of air traffic protection from airspace violation and other aviation threats, using the dematel and ahp methods. this multi-criteria model consists of criteria and attributes that are significant for the selection of a combat aircraft. the above stated research objective gives rise to the following general hypothesis: using the dematel and ahp methods, it is possible to determine and to evaluate the criteria for selecting the aircraft for the purpose of air traffic protection from airspace violation and other aviation threats. the scientific and methodological contribution of the paper is reflected in the new approach of determining significant and eliminating less significant criteria and attributes for the needs of selecting a system with a special role. also, the scientific contribution is reflected in the increase of the theoretical fund, which refers to the systematization of previous knowledge by the method of content analysis and to the gathering of relevant data about the criteria and attributes of criteria for the selection of the aircraft for the needs of conducting missions during peacetime. the practical contribution is reflected in the fact that the paper proposes a model that could improve the process of equipping the system of defence with new equipment. also, modification of the model (by changing the criteria) enables its application in the procurement of a wide range of equipment for the needs of various forms of human activity. 2. materials and methods the research was carried out in three phases: identification of initial criteria and attributes (for selection of the combat aircraft), determination of significant criteria and attributes of criteria (for selection of the aircraft), and prioritization of the selected criteria and attributes (figure 1). in the first phase, all measures that enable selection of the combat aircraft were identified by analyzing the contents of the relevant scientific fund (čokorilo, gvozdenović, mirosavljević & vasov, 2010; kirby, 2001; dagdeviren, yavuz & kilinc, 2009; petrović, cvetković, kankaraš & kapor, 2017). the selection and conceptual evaluation of military aircraft characteristics by applying the overall evaluation criterion (oec) was done by mavris & delaurentis (1995). the selection and evaluation of the criteria for equipping the army with a combat aircraft using the ahp method was done by vlačić (2012). the identified measures are divided into general and specific measures using the classification method (based on the level of generality). the general measures represent criteria, and the specific measures represent attributes. taking into consideration the number and the different significance of the identified criteria and attributes, it was necessary to eliminate the irrelevant ones and to evaluate the significant criteria and attributes. this was carried out using the questionnaire, the dematel method and the ahp method.
based on these results, the model that provides a multi-criteria analysis of the selection of the aircraft for the air traffic protection from the airspace violation and other aviation threats was developed. figure 1. algorithm of a multi-criteria selection of the aircraft petrović & kankaraš /decis. mak. appl. manag. eng. 1 (2) (2018) 93-110 96 using the questionnaire and contents analysis of literature, the criteria and the attributes of each criterion (initial criteria and attributes) for the combat aircraft were selected. the following criteria are selected: aaerodynamics and mechanics of the flight, b construction and general systems, c propulsion, d – avionics and sensors, e integrated logistics support, f – armament, g – reconnaissance equipment, h – concept of pilot training and i – economy. the initial attributes of criterion aerodynamics and mechanics of the flight are: a1 – weight, a2 airspeed, a3 – acceleration performance, a4 – length of take off landing, a5 ceiling of flight, a6 – rate of climb, a7 – range of flight, a8 – maneuvering and stability performance, a9 – ability of supercruise and a10 – reaction time. the initial attributes of criterion construction and general systems are: b1 wing mechanization and flight control system, b2 – obstacle avoidance system, b3 – gps terrain-following, b4 – voice command system, b5 – oxygen system, b6 – radar crosssection and infrared signature, b7 – potential for modernization, b8– durability, b9 – ability of aerial refueling and b10 – possibility of ejection of pilot's seat. the initial attributes of criterion propulsion are: c1 – reliability and maintainability, c2 – maximum engine's thrust with afterburning, c3 – maximum engine's thrust without afterburning, c4 – thermal emission and c5 – maintenance system. the initial attributes of criterion avionics and sensors are: d1 – radars and other sensors, d2 – communication equipment, d3 – fire-control radar, d4 – electronic warfare equipment, d5 – multi-function display, d6 – navigation equipment, d7 – multimedia link. the initial attributes of criterion integrated logistics support are: e1 – reliability of aircraft, e2 convenience of maintenance, e3 – maintenance of aircraft, e4 – maintainability, e5 – ability of maintenance staff, e6 – maintenance equipment and e7 – infrastructure. the initial attributes of criterion armament are: f1 – capacity of locations for mounting armament, f2 – variety of armament, f3 – standardization of armament, f4 – number hardpoints of armament, f5 – under-fuselage hardpoints, f6 – possibility of using armament, f7 – safety work with armament on the ground, f8 – air – to – air missiles and rockets, f9 – bombs and other air to surface armament and f10 guns (cannons). the initial attributes of criterion reconnaissance equipment are: g1 possibility of reconnaissance in different weather conditions, g2 sensors range, g3 dataprocessing of reconnaissance information, g4 data-processing of reconnaissance photos and g5 data-processing of reconnaissance video. the initial attributes of criterion concept of pilot training are: h1 pilot training abroad, h2 individual training, h3 collective training and h4 simulators of flight. the initial attributes of criterion economy are: i1 – acquisition cost, i2 – life cycle costs and i3 – aircraft disposal costs. 
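for later reference, the nine initial criteria and their attribute codes can be collected in a simple lookup structure. the python sketch below is purely illustrative: the codes follow the listing above, while the structure and variable names are not part of the original study.

```python
# illustrative mapping of the initial criteria (a-i) to their attribute codes,
# as listed above; full attribute labels are omitted for brevity
initial_criteria = {
    "a": {"name": "aerodynamics and mechanics of the flight",
          "attributes": [f"a{i}" for i in range(1, 11)]},
    "b": {"name": "construction and general systems",
          "attributes": [f"b{i}" for i in range(1, 11)]},
    "c": {"name": "propulsion", "attributes": [f"c{i}" for i in range(1, 6)]},
    "d": {"name": "avionics and sensors", "attributes": [f"d{i}" for i in range(1, 8)]},
    "e": {"name": "integrated logistics support",
          "attributes": [f"e{i}" for i in range(1, 8)]},
    "f": {"name": "armament", "attributes": [f"f{i}" for i in range(1, 11)]},
    "g": {"name": "reconnaissance equipment",
          "attributes": [f"g{i}" for i in range(1, 6)]},
    "h": {"name": "concept of pilot training",
          "attributes": [f"h{i}" for i in range(1, 5)]},
    "i": {"name": "economy", "attributes": [f"i{i}" for i in range(1, 4)]},
}

# quick sanity check: 9 criteria and 61 initial attributes in total
assert len(initial_criteria) == 9
assert sum(len(c["attributes"]) for c in initial_criteria.values()) == 61
```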
from initial criteria and attributes, the determination of criteria and attributes for the selection of the air traffic protection aircraft was preformed using the dematel method (moghaddam, sahafzadeh, alavijeh, yousefdehi, & hosseini, 2010; sumrit & anuntavoranich, 2013). by applying this method (decision – making trial and evaluation laboratory), based on the determination of direct and indirect influences between each criterion (attribute) on each citerion (attribute), criteria, which mutual impact on other criteria being less significant, were eliminated (moghaddam et al, 2010). dematel-ahp multi-criteria decision making model for the determination and evaluation… 97 each of the respondents (45 specialists – military pilots and officers of the aviation technical service) indicated the degree of direct and indirect influences between each criterion on each citerion and each attribute on each attribute of the criterion using the questionnarie. this step was done according to dematel method (sumrit & anuntavoranich, 2013). pairwise comparison was done as follows. the value of each pair is ranked by a number whose value is from 0 to 4 (0 – no influence; 1 – low influence; 2 – middle influence; 3 – high influence; 4 – very high influence) the assessment of each respondent is shown by a nonnegative matrix n n (for criterion 9n  ). each element of the k-matrix which is calculated by the equation 1 is a non-negative number k ij x , where is 1 k m  . k k ij n n x x      (1) matrices 1x , 2x ,..., mx represent individual preference (pairwise comparison) matrices of the respondents. the diagonal values are 0 because there is no influence between same criterions (sumrit & anuntavoranich, 2013). by calculating the means of the individual gathered values, a matrix of direct influences was created (table 1). table 1. matrix of direct influences of criteria k a b c d e f g h i a 0 3.85 3.92 3.45 3.73 3.68 0.45 0.54 3.9 b 2.17 0 2.04 3.12 1.45 1.72 0.53 0.34 1.14 c 2.94 1.11 0 1.73 1.14 0.94 0.52 0.32 0.85 d 3.65 3.2 3.91 0 3.17 3.2 0.61 0.29 3.23 e 3.42 3.17 2.12 1.92 0 2.73 0.45 0.34 2.45 f 3.18 2.57 3.14 3.22 2.72 0 0.32 0.35 2.74 g 0.51 0.42 0.37 0.32 0.41 0.38 0 0.42 0.39 h 0.33 0.42 0.44 0.41 0.37 0.39 0.51 0 0.19 i 3.92 3.17 2.93 3.45 3.15 3.08 0.28 0.32 0 in the second phase, the normalization of the matrix of direct influences is calculated using the following equation: 1 1 1 1 max max , max n n i ij i n iji j x d x x                (2) d – normalized matrix of direct influences, x – element of the mean value matrix of estimation of mutual influence. each element of the matrix of direct influences of criteria is divided with the maximum value of the sum of the columns and rows of the matrix of direct influence and new matrix is formed – normalized matrix of direct influence of criteria (table 2). petrović & kankaraš /decis. mak. appl. manag. eng. 
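the averaging and normalization steps described above (eqs. 1 and 2) can be written compactly. the following numpy sketch assumes the individual respondent matrices are already available as 9 x 9 arrays; the function names are illustrative only.

```python
import numpy as np

def direct_influence_matrix(respondent_matrices):
    """average the m individual pairwise influence matrices (eq. 1)."""
    x = np.mean(np.stack(respondent_matrices), axis=0)
    np.fill_diagonal(x, 0.0)  # a criterion has no influence on itself
    return x

def normalize_direct_influence(x):
    """divide by the largest of the row and column sums (eq. 2)."""
    s = max(x.sum(axis=1).max(), x.sum(axis=0).max())
    return x / s
```

applied to the matrix of direct influences in table 1, the largest of the row and column sums is that of row a (about 23.5), which reproduces the entries of table 2 (e.g. 3.85 / 23.5 ≈ 0.164).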
1 (2) (2018) 93-110 98 table 2: normalized matrix of direct influence of criteria k a b c d e f g h i a 0.000 0.164 0.167 0.147 0.159 0.156 0.019 0.023 0.166 b 0.092 0.000 0.087 0.133 0.062 0.073 0.023 0.014 0.048 c 0.125 0.047 0.000 0.074 0.048 0.040 0.022 0.014 0.036 d 0.155 0.136 0.166 0.000 0.135 0.136 0.026 0.012 0.137 e 0.145 0.135 0.090 0.082 0.000 0.116 0.019 0.014 0.104 f 0.135 0.109 0.134 0.137 0.116 0.000 0.014 0.015 0.116 g 0.022 0.018 0.016 0.014 0.017 0.016 0.000 0.018 0.017 h 0.014 0.018 0.019 0.017 0.016 0.017 0.022 0.000 0.008 i 0.167 0.135 0.125 0.147 0.134 0.131 0.012 0.014 0.000 in the next phase, all the relations between each pair of the criteria are expressed by the matrix of direct influences. elements of matrix of full direct/indirect influence of criteria were derived by the equation 3 and the matrix is shown in table 3.   1 t d i d    in (3) , , 1, 2,... ij nxn t t i j n    t – matrix of full influence, i – unit matrix of influence, ij t element of the matrix of full influence. table 3: matrix of full influence of criteria k a b c d e f g h i a 0.383 0.486 0.509 0.470 0.451 0.448 0.083 0.072 0.436 b 0.299 0.194 0.288 0.310 0.236 0.244 0.059 0.042 0.214 c 0.279 0.200 0.164 0.220 0.188 0.180 0.050 0.037 0.170 d 0.484 0.435 0.479 0.313 0.406 0.405 0.083 0.058 0.389 e 0.409 0.376 0.353 0.331 0.233 0.336 0.066 0.051 0.313 f 0.429 0.378 0.416 0.398 0.359 0.254 0.066 0.056 0.343 g 0.070 0.062 0.063 0.058 0.057 0.056 0.009 0.025 0.055 h 0.058 0.057 0.060 0.056 0.051 0.052 0.030 0.006 0.042 i 0.490 0.433 0.443 0.439 0.404 0.401 0.070 0.059 0.268 by comparing the values in the matrix of full influence of criteria with the calculated threshold value it is determined whether the criteria are significant or not. namely, if all the values of one criterion are less than the threshold value, this criterion is not significant for the selection of the aircraft. the threshold value is calculated using the equation 4 and is 0.232. 1 1 n n ij i j t n          (4)  threshold value, n – full number of elements of matrix t. dematel-ahp multi-criteria decision making model for the determination and evaluation… 99 table 4. comparison of the elements of matrix of full influence of criteria with the threshold values of criteria k a b c d e f g h i a 0.151 0.254 0.277 0.238 0.219 0.216 -0.149 -0.160 0.204 b 0.067 -0.038 0.056 0.078 0.004 0.012 -0.173 -0.190 -0.018 c 0.047 -0.032 -0.068 -0.012 -0.044 -0.052 -0.182 -0.195 -0.062 d 0.252 0.203 0.247 0.081 0.174 0.173 -0.149 -0.174 0.157 e 0.177 0.144 0.121 0.099 0.001 0.104 -0.166 -0.181 0.081 f 0.197 0.146 0.184 0.166 0.127 0.022 -0.166 -0.176 0.111 g -0.162 -0.170 -0.169 -0.174 -0.175 -0.176 -0.223 -0.207 -0.177 h -0.174 -0.175 -0.172 -0.176 -0.181 -0.180 -0.202 -0.226 -0.190 i 0.258 0.201 0.211 0.207 0.172 0.169 -0.162 -0.173 0.036 by observing the obtained results it is concluded that two criteria (g and h) are not significant for the selection of the aircraft (table 4). in the same way attributes of selected criteria that are not relevant for the selection of the aircraft were eliminated (table 5-11). table 5. 
comparison of the elements of matrix of full influence of attributes of criterion aerodynamics and mechanics of the flight with the threshold values a a1 a2 a3 a4 a5 a6 a7 a8 a9 a10 a1 -0.236 -0.144 -0.145 -0.213 -0.201 -0.148 -0.135 -0.133 -0.209 -0.151 a2 -0.162 0.225 0.363 -0.150 -0.148 0.159 0.371 0.421 -0.156 0.360 a3 -0.153 0.436 0.249 -0.140 -0.141 0.316 0.418 0.436 -0.152 0.401 a4 -0.214 -0.134 -0.140 -0.234 -0.208 -0.155 -0.137 -0.128 -0.208 -0.128 a5 -0.224 -0.169 -0.169 -0.221 -0.239 -0.180 -0.166 -0.164 -0.217 -0.167 a6 -0.165 0.383 0.360 -0.149 -0.148 0.135 0.364 0.399 -0.159 0.331 a7 -0.179 0.207 0.169 -0.175 -0.174 0.096 0.082 0.237 -0.182 0.069 a8 -0.165 0.248 0.181 -0.163 -0.166 0.172 0.282 0.160 -0.171 0.244 a9 -0.211 -0.121 -0.120 -0.200 -0.210 -0.142 -0.116 -0.109 -0.233 -0.120 a10 -0.147 0.443 0.425 -0.138 -0.138 0.346 0.445 0.456 -0.147 0.246 the attributes a1, a4, a5 and a9 are eliminated. petrović & kankaraš /decis. mak. appl. manag. eng. 1 (2) (2018) 93-110 100 table 6. comparison of the elements of matrix of full influence of attributes of criterion construction and general systems with the threshold values b b1 b2 b3 b4 b5 b6 b7 b8 b9 b10 b1 0.303 -0.111 -0.091 -0.092 0.390 0.442 0.174 -0.095 -0.072 0.523 b2 -0.073 -0.192 -0.162 -0.161 -0.086 -0.083 -0.106 -0.160 -0.157 -0.068 b3 -0.102 -0.174 -0.194 -0.163 -0.117 -0.114 -0.126 -0.166 -0.166 -0.097 b4 -0.085 -0.163 -0.157 -0.192 -0.103 -0.089 -0.127 -0.162 -0.171 -0.087 b5 0.532 -0.105 -0.084 -0.093 0.220 0.394 0.269 -0.093 -0.088 0.517 b6 0.335 -0.112 -0.099 -0.104 0.285 0.189 0.223 -0.114 -0.107 0.427 b7 0.494 -0.100 -0.078 -0.082 0.389 0.379 0.143 -0.091 -0.084 0.498 b8 -0.081 -0.162 -0.156 -0.156 -0.100 -0.099 -0.119 -0.191 -0.153 -0.078 b9 -0.090 -0.176 -0.171 -0.170 -0.106 -0.095 -0.111 -0.157 -0.192 -0.084 b10 0.518 -0.106 -0.089 -0.092 0.281 0.435 0.314 -0.094 -0.090 0.330 the attributes b2, b3, b4, b8 and b9 are eliminated. table 7. comparison of the elements of matrix of full influence of attributes of criterion propulsion with the threshold values c c1 c2 c3 c4 c5 c1 0.001 0.147 0.051 0.368 0.263 c2 0.056 -0.218 -0.214 0.082 -0.140 c3 0.123 0.002 -0.172 0.184 0.015 c4 -0.070 -0.101 -0.053 -0.094 -0.009 c5 -0.051 -0.008 -0.047 0.090 -0.174 all attributes are accepted. table 8. comparison of the elements of matrix of full influence of attributes of criterion avionics and sensors with the threshold values d d1 d2 d3 d4 d5 d6 d7 d1 -0.083 0.030 -0.001 0.011 0.028 0.070 -0.100 d2 0.006 -0.071 -0.015 -0.013 0.028 0.070 -0.048 d3 0.140 0.147 -0.034 0.146 0.162 0.188 0.099 d4 -0.065 -0.062 -0.095 -0.135 -0.088 -0.011 -0.106 d5 -0.099 -0.053 -0.134 -0.118 -0.144 -0.065 -0.111 d6 0.099 0.115 0.060 0.080 0.097 0.008 0.037 d7 0.019 0.026 -0.008 -0.006 0.028 0.081 -0.109 all attributes are accepted. dematel-ahp multi-criteria decision making model for the determination and evaluation… 101 table 9. comparison of the elements of matrix of full influence of attributes of criterion integrated logistics support with the threshold values e e1 e2 e3 e4 e5 e6 e7 e1 0.114 -0.091 0.360 0.375 0.305 0.348 0.279 e2 -0.079 -0.212 -0.082 -0.085 -0.103 -0.088 -0.125 e3 0.287 -0.121 0.127 0.280 0.206 0.270 0.214 e4 0.139 -0.133 0.117 0.021 0.085 0.077 0.037 e5 0.154 -0.142 0.053 0.113 -0.020 0.064 0.022 e6 0.219 -0.134 0.201 0.187 0.131 0.058 0.102 e7 0.360 -0.102 0.328 0.333 0.281 0.307 0.102 the attribute e2 is eliminated. table 10. 
comparison of the elements of matrix of full influence of attributes of criterion armament with the threshold values f f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f1 -0.010 0.073 0.171 0.057 -0.179 0.163 0.132 0.029 -0.155 0.122 f2 0.021 -0.023 0.052 0.061 -0.176 0.037 0.094 0.087 -0.174 0.086 f3 -0.052 -0.056 -0.087 -0.044 -0.200 -0.041 -0.036 -0.065 -0.192 -0.034 f4 0.012 0.023 0.030 -0.031 -0.178 0.104 0.096 0.047 -0.181 0.134 f5 -0.174 -0.175 -0.158 -0.173 -0.243 -0.154 -0.148 -0.180 -0.222 -0.168 f6 0.089 0.125 0.193 0.158 -0.164 0.056 0.160 0.078 -0.156 0.217 f7 0.105 0.085 0.152 0.040 -0.182 0.088 0.009 0.014 -0.166 0.076 f8 0.251 0.256 0.298 0.237 -0.141 0.279 0.283 0.080 -0.129 0.253 f9 -0.181 -0.175 -0.170 -0.182 -0.227 -0.170 -0.155 -0.176 -0.242 -0.165 f10 0.189 0.214 0.218 0.151 -0.140 0.240 0.266 0.193 -0.145 0.113 the attributes f5 and f9 are eliminated. table 11. comparison of the elements of matrix of full influence of attributes of criterion economy with the threshold values i i1 i2 i3 i1 0.075 0.275 0.317 i2 0.097 -0.332 -0.041 i3 0.055 -0.108 -0.334 all three attributes are accepted. the evaluation of the selected criteria and attributes of criteria was performed by the ahp method (the analytich hierarchy process). the gathering data was carried out using the questionnaire which was adapted to scale of relative importance (saaty, 1980). using the standard scale, each element of comparasion ij a of matrix a can get one of 17 numerical values from a discrete interval [1/9, 9]. prioritization is conducted using the eigenvector method – ev (saaty, 1980). the criteria and attributes of criteria are pairwise compared by respondents. by calculating the mode of the individual gathered values, a pairwise comparison matrix was created (table12). petrović & kankaraš /decis. mak. appl. manag. eng. 1 (2) (2018) 93-110 102 table 12.pairwise comparison matrix (for criteria) k a b c d e f i a 1 4 4 3 4 3 2 b 0.25 1 2 0.5 0.5 0.333 0.5 c 0.25 0.5 1 0.333 0.5 0.5 0.5 d 0.333 2 3 1 2 3 2 e 0.25 2 2 0.5 1 0.5 0.5 f 0.333 3 2 0.333 2 1 0.333 i 0.5 2 2 0.5 2 3 1 based on values from the pairwise comparison matrix, a normalized pairwise comparison matrix was calculated by the equation 5 (saaty, 1980). 1 ij nij ij j a a a     (5) table 13. normalized pairwise comparison matrix (for criteria) k a b c d e f i a 0.343 0.276 0.250 0.487 0.333 0.265 0.293 b 0.086 0.069 0.125 0.081 0.042 0.029 0.073 c 0.086 0.034 0.063 0.054 0.042 0.044 0.073 d 0.114 0.138 0.188 0.162 0.167 0.265 0.293 e 0.086 0.138 0.125 0.081 0.083 0.044 0.073 f 0.114 0.207 0.125 0.054 0.167 0.088 0.049 i 0.171 0.138 0.125 0.081 0.167 0.265 0.146 from the table 13, the weight values w were calculated by the equation 6, which are shown in table 14. 1 n ij j i a w n     (6) i w weight value, ij a element of normalized pairwise comparison matrix table 14. 
weight values of criteria ( 0.055cr  ) k a b c d e f i w rank a 0.343 0.276 0.250 0.487 0.333 0.265 0.293 0.321 1 b 0.086 0.069 0.125 0.081 0.042 0.029 0.073 0.072 6 c 0.086 0.034 0.063 0.054 0.042 0.044 0.073 0.057 7 d 0.114 0.138 0.188 0.162 0.167 0.265 0.293 0.189 3 e 0.086 0.138 0.125 0.081 0.083 0.044 0.073 0.090 5 f 0.114 0.207 0.125 0.054 0.167 0.088 0.049 0.115 4 i 0.171 0.138 0.125 0.081 0.167 0.265 0.146 0.156 2 dematel-ahp multi-criteria decision making model for the determination and evaluation… 103 it can be noted (table 14) that the highest weight value in the selection of the aircraft for air traffic protection has the criterion of aerodynamics and flight mechanics (a), while the lowest weight value has criterion propulsion (c). checking the consistency of the results was tested by the consistency ratio applying the following equation (pamučar, 2017): cicr ri  (7) where is: ci consistency index. max 1 n ci n     (8) max  maximum eigenvector of the matrix of comparison. this value was calculated as follows: max 1 1 n i i n      (9) i i i b w   (10) value i b was calcualted as follows: 11 12 11 1 2 21 22 2 2 1 2 n n n nn n nn a a ab w b a a a w b wa a a                           (11) ij a represents the value of the element from the pairwise comparison matrix. ri random index, which depends on the number of rows columns of the matrix n (pamučar, 2017). for example, if 2n  , than is 0ri  , if 3n   0.52ri  , if 4n   0.89ri  , if 5n   1.11ri  , if 6n   1.25ri  , if 7n   1.35ri  , if 8n   1.4ri  . if 0.10cr  then the result is consistent. in this case, the consistency ratio is 0.055 and it is lower then 0.1, so the result is consistent (there is no need for corrections of the comparison). the weight values for attributes are determined in the same way. weight values for the attributes of each criterion are shown in the following tables. table 15. weight values for attributes of criterion aerodynamics and mechanics of the flight ( 0.03cr  ) a a2 a3 a6 a7 a8 a10 w1 rank a2 0.185 0.222 0.273 0.222 0.254 0.147 0.217 2 a3 0.046 0.056 0.045 0.037 0.028 0.088 0.050 6 a6 0.092 0.167 0.136 0.148 0.169 0.147 0.143 3 a7 0.061 0.111 0.068 0.074 0.042 0.088 0.074 5 a8 0.061 0.167 0.068 0.148 0.085 0.088 0.103 4 a10 0.554 0.278 0.409 0.370 0.423 0.441 0.413 1 petrović & kankaraš /decis. mak. appl. manag. eng. 1 (2) (2018) 93-110 104 table 16. weight values for attributes of criterion construction and general systems ( 0.01cr  ) b b1 b5 b6 b7 b10 w2 rank b1 0.404 0.412 0.316 0.343 0.490 0.393 1 b5 0.058 0.059 0.053 0.057 0.061 0.057 5 b6 0.134 0.118 0.105 0.086 0.082 0.105 4 b7 0.202 0.176 0.211 0.171 0.122 0.177 3 b10 0.202 0.235 0.316 0.343 0.245 0.268 2 table 17. weight values for attributes of criterion propulsion ( 0.02cr  ) c c1 c2 c3 c4 c5 w3 rank c1 0.162 0.222 0.222 0.176 0.147 0.186 2 c2 0.081 0.111 0.148 0.118 0.117 0.115 3 c3 0.054 0.056 0.074 0.118 0.084 0.077 4 c4 0.054 0.056 0.037 0.059 0.065 0.054 5 c5 0.649 0.556 0.519 0.529 0.587 0.568 1 table 18. 
weight values for attributes of criterion avionics and sensors ( 0.03cr  ) d d1 d2 d3 d4 d5 d6 d7 w4 rank d1 0.152 0.133 0.170 0.190 0.194 0.100 0.218 0.165 3 d2 0.076 0.067 0.068 0.095 0.129 0.067 0.036 0.077 5 d3 0.304 0.333 0.339 0.286 0.258 0.400 0.327 0.321 1 d4 0.038 0.033 0.057 0.048 0.032 0.067 0.036 0.044 7 d5 0.051 0.033 0.085 0.095 0.065 0.067 0.055 0.064 6 d6 0.304 0.200 0.170 0.143 0.194 0.200 0.218 0.204 2 d7 0.076 0.200 0.113 0.143 0.129 0.100 0.109 0.124 4 table 19. weight values for attributes of criterion integrated logistics support ( 0.03cr  ) e e1 e3 e4 e5 e6 e7 w5 rank e1 0.374 0.261 0.357 0.350 0.329 0.462 0.355 1 e3 0.124 0.087 0.036 0.100 0.082 0.077 0.084 5 e4 0.075 0.174 0.071 0.100 0.055 0.058 0.089 4 e5 0.053 0.043 0.036 0.050 0.041 0.058 0.047 6 e6 0.187 0.174 0.214 0.200 0.164 0.115 0.176 3 e7 0.187 0.261 0.286 0.200 0.329 0.231 0.249 2 dematel-ahp multi-criteria decision making model for the determination and evaluation… 105 table 20. weight values for attributes of criterion armament ( 0.04cr  ) f f1 f2 f3 f4 f6 f7 f8 f10 w6 rank f1 0.032 0.024 0.031 0.029 0.025 0.023 0.053 0.024 0.030 8 f2 0.065 0.049 0.125 0.114 0.041 0.023 0.053 0.043 0.064 5 f3 0.065 0.024 0.063 0.114 0.062 0.034 0.074 0.071 0.063 6 f4 0.065 0.024 0.031 0.057 0.062 0.034 0.074 0.071 0.052 7 f6 0.161 0.146 0.125 0.114 0.124 0.136 0.122 0.107 0.130 3 f7 0.097 0.146 0.125 0.114 0.062 0.068 0.074 0.043 0.091 4 f8 0.226 0.341 0.313 0.286 0.373 0.341 0.368 0.428 0.334 1 f10 0.290 0.244 0.188 0.171 0.249 0.341 0.184 0.214 0.235 2 table 21. weight values for attributes of criterion economy ( 0.02cr  ) i i1 i2 i3 w7 rank i1 0.621 0.600 0.692 0.638 1 i2 0.310 0.300 0.231 0.280 2 i3 0.069 0.100 0.077 0.082 3 3. results on the basis of the first two phases of the research, less significant criteria and attributes are eliminated. these criteria are: reconnaissance equipment and concept of pilot training. in the same way attributes of criterion aerodynamics and mechanics of the flight are eliminated: weight, length of take off landing, range and ceiling of flight and ability of supercruise. eliminated attributes of criterion construction and general systems are: obstacle avoidance system, gps terrain-following, voice command system, durability and ability of aerial refueling. also, attribute convenience of maintenance of criterion integrated logistics support is eliminated. the following attributes of criterion armament are eliminated: under-fuselage hardpoints and bombs and other air to surface armament. other attributes of selected criteria are significant for selection the air traffic protection aircraft. their determination was the objective of the first part of the research. determining differences in significance between criteria and attributes of criteria was the objective of the second part of the research (using the ahp method). prioritization of the criteria determined that the most significant criterion (table 14 and figure 2) is aerodynamics and mechanics of the flight (rank 1, weight 0.321), while the least significant is the criterion propulsion (rank 7; 0.057). attributes are also evaluated by prioritizing. the the most significant attribute of the criterion aerodynamics and mechanics of the flight (table 15) is reaction time, and the least significant attribute is acceleration performance. furthermore, for criterion construction and general systems the most significant attribute is wing mechanization and flight control system, and least significant is oxygen system. 
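a minimal numpy sketch of the ahp computations summarized in eqs. (5)-(11): column normalization and row averaging give the weight vector, and the consistency ratio follows from the estimate of the maximum eigenvalue. the final function shows one standard way of combining criterion and attribute weights into a global priority; this synthesis step is an illustrative assumption and is not reported in the paper itself.

```python
import numpy as np

# random index values for n = 1..8 (as used in the paper, after pamucar, 2017)
RI = {1: 0.0, 2: 0.0, 3: 0.52, 4: 0.89, 5: 1.11, 6: 1.25, 7: 1.35, 8: 1.40}

def ahp_weights(a):
    """weights as row means of the column-normalized matrix (eqs. 5-6)."""
    a = np.asarray(a, dtype=float)
    return (a / a.sum(axis=0)).mean(axis=1)

def consistency_ratio(a, w):
    """cr = ci / ri, with ci = (lambda_max - n) / (n - 1) (eqs. 7-11)."""
    a = np.asarray(a, dtype=float)
    n = len(w)
    if n <= 2:
        return 0.0  # cr is not meaningful for matrices of order 1 or 2
    lam_max = float(np.mean((a @ w) / w))
    ci = (lam_max - n) / (n - 1)
    return ci / RI[n]

def global_priority(criterion_weight, attribute_weight):
    """standard hierarchical synthesis (assumed here for illustration)."""
    return criterion_weight * attribute_weight
```

under that assumption, the reaction time attribute a10 would, for example, obtain a global priority of about 0.321 x 0.413 ≈ 0.133 using the weights of tables 14 and 15.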
the most significant attribute of the criterion propulsion (table 17) is maintenance system and the least significant attribute is thermal emission. for the criterion avionics and sensors, the highest weight value (table 18) belongs to fire-control radar and the lowest weight value to electronic warfare equipment. for the criterion integrated logistics support, the most significant attribute is reliability of aircraft and the least significant is ability of maintenance staff (table 19). the air-to-air missiles and rockets are the most significant attribute for the criterion armament, and the least significant attribute for the same criterion is capacity of locations for mounting armament (table 20). prioritization for the criterion economy (table 21) determined that the highest weight value belongs to acquisition cost, life cycle costs is in the middle, and the lowest weight value belongs to aircraft disposal costs. for each weight value calculation, the consistency of the results was checked. since all consistency ratios were less than 0.1, it is concluded that all results of the prioritization are consistent. considering all of the aforementioned, it is concluded that the objective of the research is achieved, the general hypothesis is proven and the model is proposed (figure 2). figure 2. proposed model for selection of the air traffic protection aircraft with weight values for criteria and attributes of criteria 4. discussion on the basis of the results it can be concluded that there are criteria and attributes which are significant for equipping the army with the combat aircraft (vlačić, 2012), but which are irrelevant in peacetime for the purpose of air traffic protection in the case of airspace violation. for example, the most significant criterion for the combat aircraft is aerodynamics and mechanics of the flight, but, because of its multiple roles, the criterion reconnaissance equipment is also very significant. the need for equipping two or three squadrons with the combat aircraft is the reason for the significance of the criterion concept of pilot training. despite the aforementioned, the criterion economy is less significant for equipping with the combat aircraft than in the case of equipping with the air traffic protection aircraft (vlačić, 2012). this difference, as well as the difference in the significance of the selected criteria and attributes, is a consequence of the overall picture of the organization and functioning of air traffic over the territory of the republic of serbia. the small area, the elongated form of the territory, the high frequency of traffic, the geostrategic position, the number of air routes, the financial capabilities of the country, and the availability and classes of airports are only several factors that have an impact on the determination and evaluation of criteria for selecting the aircraft (for example, it is easy to notice that, due to the form of the territory and the area of the country, the reaction time, i.e. the time required by the duty aircraft to take the prescribed measures on the ground after receiving an airspace endangering warning, to take off, to be navigated and to intercept an aviation threat, is very significant).
the differences in the significance of the factors are also a consequence of the fact that the combat aircraft conducts a wide range of tasks such as: air-to-air combat, aerial reconnaissance, forward air control, electronic warfare, air interdiction, suppression of enemy air defence and close air support. these missions would be conducted by the aircraft in extremely specific conditions. therefore, the following four overall evaluation criteria are significant for the selection of the aircraft: affordability, mission capability, operational readiness and operational safety (mavris & delaurentis, 1995). it might be concluded that there are a lot of factors which impact the determination and evaluation of criteria and attributes of criteria for selecting the air traffic protection aircraft. also, those criteria are specific due to the mission that is conducted by the air traffic protection aircraft, although it is essentially an aircraft designed for use both in peacetime and wartime. 5. conclusion air traffic is not immune to numerous security threats, including aviation threats. in the modern age, the possibility of occurrence of airspace violation and other aviation threats is a reality. therefore, the protection of air traffic from aviation threats is a very important security mission all around the world. in small countries, this task is conducted by their own aviation or by the aviation of some other countries. there is no doubt that for each country it is better to conduct this mission with its own aviation. it is also important to know that the aircraft whose mission is to protect air traffic from aviation threats have to meet the relevant international standards and technological criteria. bearing in mind the aforementioned and the price of modern military aircraft, small countries usually make the decision to equip only a few aircraft for conducting this mission. therefore, it is necessary to determine the criteria of equipping very precisely, as they depend on the set of factors mentioned in this paper; because of this, the precise determination and evaluation of the criteria for the selection of the air traffic protection aircraft on the example of the republic of serbia was the subject of this research. for the purposes of this paper, traditional multi-criteria decision making methods are used and a model is proposed that can be applied in practice (also for the purposes of other countries that have similar territorial characteristics). the mutual influence of the criteria (attributes) is determined using the dematel method, and the final definition of the criteria (attributes) and their weights are calculated by the ahp method. the applied methods, the obtained results and the proposed model make this research scientifically and methodologically justified. furthermore, it is possible to propose similar models for the needs of equipping the system of defence with other types of equipment. the above makes this research practically justified. in future research, it is possible to select a specific aircraft using some other multi-criteria decision making methods (topsis, mabac, vikor, mairca, etc.). also, models for designing certain technological solutions according to user requirements can be created. furthermore, the application of similar models is possible for the purpose of implementing organizational changes in some organizational systems.
future research can also focus on the development of similar models using traditional methods in combination with methods that take into account uncertainty – fuzzy numbers tipe one-two or rough or interval-valued rough fuzzy numbers, intuitionistic fuzzy numbers, etc (vahdani, tavakkoli-moghaddam, meysam mousavi & ghodratnama, 2013; sizong & tao, 2016; zywica, stachowiak, & wygralak, 2016, pamučar, petrović & cirović, 2018), which would significantly improve the field of multi-criteria decision making. acknowledgement: the paper is a part of the research done within the project vadh/3/17-19. the authors would like to thank to the the ministry of defence, and the project manager. references abbass, h., tang, j., amin, r., ellejmi, m., & kirby, s. (2014). augmented cognition using real -time eeg-based adaptive strategies for air traffic control. in stafford, s.m (ed.): proceedings of the human factors and ergonomics society 58th annual meeting (pp. 230-234). chicago. illinois: human factors and ergonomics society chen, j., chen, l., & sun d. (2017). air traffic flow management under uncertainty using chance constrained optimization. transportation research part b, 102, 124141. doi.org/10.1016/j.trb.2017.05.014 čokorilo, o., gvozdenović, s., mirosavljević, p., & vasov, l. (2010). multi attribute decision making: assessing the technological and operational parameters of an aircraft. transport, 25 (4), 352-356. doi.org/10.3846/transport.2010.43 dagdeviren, m., yavuz, s., & kilinc, n. (2009). weapon selection using the ahp and topsis methods under fuzzy environment. expert systems with applications, 36, 8143–8151. doi.elsevier.com/locate/eswa doi:org/10.1016/j.arcontrol.2016.09.012 balakrishna, h. (2016). control and optimization algorithms for air transportation systems. annual reviews in control, 41, 39-46. doi: dematel-ahp multi-criteria decision making model for the determination and evaluation… 109 org/10.1016/j.arcontrol.2016.04.019 durso, f., & manning, c. (2008). air traffic control. reviews of human factors and ergonomic,. 4 (1), 195-244. doi: org/10.1518/155723408x342853 gordić, m., & petrović, i. (2014). raketni sistemi u odbrani malih država. beograd: mc odbrana. [in serbian] kirby, m. r. a. (2001). methodology for technology identification, evaluation, and selection in conceptual and preliminary aircraft design. atlanta: georgia institute of technology. mavris, d., & delaurentis, d. (1995). an integrated approach to military aircraft selection and concept evaluation. the 1st aiaa aircraft engineering, technology, and operations congress, los angeles, (1-11). american institute of aeronautics and astronautics menon, p.k., sweriduk,g.d., & bilimoria, d. (2004). new approach for modeling, analysis, and control of air traffic flow. journal of guidance, control, and dynamics, september, 27 (5), 737-744. doi: org/10.2514/1.2556 menon, p.k., & park, s.g. (2016). dynamics and control technologies in air traffic management. annual reviews in control, 42, 271-284. doi.org/10.1016/j.arcontrol.2016.09.012 moghaddam, n. b., sahafzadeh, m., alavijeh, a. s., yousefdehi, h., & hosseini, s. h. (2010). strategic environment analysis using dematel method thorogh systematic approach: case study of energy research institute in iran. management science and engineering, 4 (4), 95-105 pamučar, d. (2017). operaciona istraživanja. beograd: rabek. [in serbian] pamučar, d., petrović, i., & cirović, g. (2018). 
modification of the best-worst and mabac methods: a novel approach based on interval-valued fuzzy-rough numbers. expert systems with applications, 91, 89-106. doi.org/10.1016/j.eswa.2017.08.042 petrović, d., cvetković, i., kankaraš, m., & kapor n. (2017). objective technology selection model: the example of complex combat systems. international journal of scientific & engineering research, 8 (3), 105 – 114. petrović, i. (2013) konceptualni model sistema protivvazduhoplovne odbrane vojske srbije. beograd: univerzitet odbrane. [unpublished doctoral dissertation] [in serbian] petrović, i., kankaraš, m., & cvetković k. (2015). significance and prospects of the development of defence system. vojno delo 67 (6), 86-98. doi: 10.5937/vojdelo1506086p petrović, i., kankaraš, m., & gordić, p. (2014). model proračuna dugoročne finansijske održivosti izvođenja operacije kontrole i zaštite vazdušnog prostora. vojno delo, 66 (6), 219-226. doi: 10.5937/vojdelo1404219p [in serbian] saaty, t.l. (1980). the analytic hierarchy process. new york: mcgraw-hill. sizong, g., & tao, s. (2016). interval-valued fuzzy number and its expression based on structured element, advances in intelligent and soft computing, 62, 1417-1425. petrović & kankaraš /decis. mak. appl. manag. eng. 1 (2) (2018) 93-110 110 steiner, s., mihetec, t., & božičević, a., (2010). prospects of air traffic management in south eastern europe. promet-traffic & transportation, scientific journal on traffic and transportation research, 22 (4), 293302. doi.org/10.7307/ptt.v22i4.194 sumrit, d., & anuntavoranich, p. (2013). using dematel method to analyze the casual relations on technological innovation capability evaluation factors in thai technology-based firms. international transaction journal of engineering, management, & applied sciences & technologies, 4 (2), 81-103. vahdani, b., tavakkoli-moghaddam, r., meysam mousavi, s., & ghodratnama. a. (2013). soft computing based on new interval-valued fuzzy modified multi-criteria decision-making method, applied soft computing, 13, 165–172. doi.org/10.1016/j.asoc.2012.08.020 vlačić, s. (2012). definisanje kriterijuma za izbor višenamenskog borbenog aviona za potrebe vazduhoplovstva i protivvazduhoplovne odbrane vojske srbije. beograd: univerzitet odbrane. [unpublished doctoral dissertation] [in serbian] zywica, p., stachowiak, a. & wygralak, m. (2016). an algorithmic study of relative cardinalities for interval-valued fuzzy sets. fuzzy sets and systems, 294, 105–124. doi.org/10.1016/j.fss.2015.11.007 plane thermoelastic waves in infinite half-space caused decision making: applications in management and engineering vol. 3, issue 2, 2020, pp. 119-130 issn: 2560-6018 eissn: 2620-0104 doi: https://doi.org/10.31181/dmame2003119m * corresponding author. e-mail addresses: km.18ma1103@phd.nitdgp.ac.in (k. mohanta), arindam84nit@gmail.com (a. dey), anita.buie@gmail.com (a. pal) a study on picture dombi fuzzy graph kartick mohanta 1*, arindam dey 2 and anita pal 1 1 department of mathematics, national institute of technology durgapur, india 2 department of computer sciences and engineering, saroj mohan institute of technology, hooghly, india received: 3 june 2020; accepted: 4 september 2020; available online: 20 september 2020. original scientific paper abstract: the picture fuzzy graph is a newly introduced fuzzy graph model to handle with uncertain real scenarios, in which simple fuzzy graph and intuitionistic fuzzy graph may fail to model those problems properly. 
the picture fuzzy graph is used efficiently in real-world scenarios which involve several answers of the types: yes, no, abstain and refusal. in this paper, the new idea of the picture dombi fuzzy graph is introduced. we also describe some operations on picture dombi fuzzy graphs, viz. union, join, composition and cartesian product. in addition, we investigate many interesting results regarding these operations. the concepts of complement and isomorphism of the picture dombi fuzzy graph are presented in this paper. some important results on weak and co-weak isomorphism of the picture dombi fuzzy graph are derived. key words: t-norm, s-norm, picture dombi fuzzy graph, union, composition, cartesian product, join, complement, homomorphism, isomorphism. 1. introduction menger (1942) presented triangular norms (t-norms) and triangular co-norms (t-conorms) in the framework of probabilistic metric spaces, which were later defined and discussed by schweizer and berthold (2011). alsina et al. (1983) proved that t-norms and t-conorms are standard models for intersecting and unifying fuzzy sets, respectively. since then, many other researchers have presented various types of t-operators for the same purpose (hamacher, 1978). zadeh's conventional t-operators, min and max, have been used in almost every application of fuzzy logic, particularly in decision-making processes and fuzzy graph theory. it is a well-known fact that, from theoretical and experimental aspects, other t-operators may work better in some situations, especially in the context of decision-making processes. for example, the product operator may be preferred to the min operator (dubois et al., 2000). for the selection of appropriate t-operators for a given application, one has to consider the properties they possess, their suitability to the model, their simplicity, their software and hardware implementation, etc. as the study on these operators has widened, multiple options are available for selecting t-operators that may be better suited for a given research. there are various real-life problems that we cannot explain with the concept of fuzzy set theory. for solving these kinds of problems, k. atanassov (1986) proposed the idea of an intuitionistic fuzzy set (ifs). in an ifs we consider a membership function and a non-membership function such that their sum lies in [0, 1]. in ifs theory, the idea of a neutral membership value is not considered. in many real-life situations, the neutral membership degree is needed, as in a democratic election. human beings generally give opinions having answers of the types: yes, no, abstain and refusal. for example, in a democratic voting system, 1000 people participated in an election. the election commission issues 1000 ballot papers, one person can take only one ballot for giving his/her vote, and a is the only candidate. the results of the election are generally divided into four groups according to the number of ballot papers, namely “vote for the candidate (500)”, “abstain in the vote (200)”, “vote against the candidate (200)” and “refusal of voting (100)”. “abstain in the vote” describes a white (blank) ballot paper, which contradicts both “vote for the candidate” and “vote against the candidate” but still counts as a cast vote. however, “refusal of voting” means bypassing the vote.
this type of real-life scenarios cannot be handled by intuitionistic fuzzy set. if we use intuitionistic fuzzy sets to describe the above voting system, the information of voting for non-candidates may be ignored. to solve this problem, cuong and kreinovich (2014) proposed the concept of picture fuzzy set which is a modified version of the fuzzy set and intuitionistic fuzzy set. picture fuzzy set (pfs) allows the idea degree of positive membership, degree of neutral membership and degree of negative membership of an element. graph theory is an important mathematical tool for handling many real-world problems. graph theory has various application in different areas like computer science, social sciences, economics, physics, system analysis, chemistry, neural networks, electrical engineering, control theory, transportation, architecture, and communication. kaufmann (1975) introduces the basic concept of fuzzy graph theory and after that rosenfeld (1975) describes more idea on the fuzzy graph-theoretic concept. krassimir t atanassov introduces the concept of intuitionistic graph theory. shovan dogra (2015) describes different types of product of fuzzy graphs. havare, özge çolakoğlu, n.d. discussed on the coronary product of two fuzzy graphs. in this paper we present the concept of picture dombi fuzzy graph (pdfg) and discussed the operations like union, join, composition, cartesian product, h-morphism, isomorphism, complement of dpfg’s. we also introduce some theorems and examples on pdfg’s. 2. preliminaries t-norm a t-norm is a binary mapping : [0,1] [0,1] [0,1]t   which is satisfies the following conditions: , , , [0,1]a b c d  1. (boundedness property) (0, 0) 0, ( ,1) (1, )t t a t a a   ; 2. (monotonicity property) ( , ) ( , )t a b t c d , if a c and b d ; 3. (commutativity property) ( , ) ( , )t a b t b a ; 4. (associativity property) ( , ( , )) ( ( , ), )t a t b c t t a b c . a study on picture dombi fuzzy graph 121 t-conorm or s-norm a t-conorm is a binary mapping : [0,1] [0,1] [0,1]s   which is satisfies the following conditions: , , , [0,1]a b c d  (boundedness property) (1,1) 1s  , ( , 0) (0, )s a s a a  ; (monotonicity property) ( , ) ( , )s a b t c d , if a c and b d ; (commutativity property) ( , ) ( , )s a b s b a ; (associativity property) ( , ( , )) ( ( , ), )s a s b c s s a b c . hamacher norm hamacher define t-norm and s-norm as follows: , [0,1]a b  (t-norm) ( , ) (1 )( ) ab t a b a b ab       , 0  . (s-norm) ( 1) ( , ) 1 ab a b s a b ab        , 1   . dombi norm the dombi norm is given by , [0,1]a b  (t-norm) 1 1 ( , ) 1 1 1 [( ) ( ) ] t a b a b a b         ; (s-norm) 1 1 ( , ) 1 1 1 [( ) ( ) ] s a b a b a b            . remark 1: if we put 1  in dombi t-norm, we have ( , ) ab t a b a b ab    , , [0,1]a b  . if we put 1  in dombi s-norm, we have 2 ( , ) 1 a b ab s a b ab     , , [0,1]a b  . fuzzy set let x be a universal set. a fuzzy set m of x is the collection of elements  in x s. t., ( ) [0,1]t   . here t is called a membership function of m i.e., : [0,1]t x  . fuzzy graph a f-graph of the graph ( , ) g g g v e  is a pair ( , )g    , where : [0,1]v  is a fuzzy set on g v and : [0,1] g g v v   is a fuzzy relation on g v s. t., ( , ) ( ) ( )x y x y     , ( , ) g g x y v v   (zadeh, 1965). picture fuzzy set (pfs) let be an universal set. a pfs is defined as follows { , ( ), ( ), ( ) : 0 ( ) ( ) ( ) 1, }                     . 
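because the extracted notation above is difficult to read, the four t-norm conditions can be restated in standard form; this is only a restatement of the definition, for all a, b, c, d in [0, 1]:

```latex
\begin{align*}
&\text{(boundedness)}   && t(0,0)=0,\qquad t(a,1)=t(1,a)=a,\\
&\text{(monotonicity)}  && t(a,b)\le t(c,d)\ \text{whenever } a\le c \text{ and } b\le d,\\
&\text{(commutativity)} && t(a,b)=t(b,a),\\
&\text{(associativity)} && t\bigl(a,t(b,c)\bigr)=t\bigl(t(a,b),c\bigr).
\end{align*}
```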
here : [0,1]  , : [0,1]  and : [0,1]  are called positive membership degree, neutral membership degree and negative membership degree respectively. for all   , 1 ( ( ) ( ) ( ))          is called refusal function of  in . mohanta et al./decis. mak. appl. manag. eng. 3 (2) (2020) 119-130 122 picture fuzzy relation (pfr) let and be two universal sets. a pfr is subset of  s. t., { ( , ), ( , ), ( , ), ( , ) : 0 ( , ) ( , ) ( , ) 1, ( , ) }                               , where : [0,1]   , : [0,1]   and : [0,1]   are called positive membership function, neutral membership function and negative membership function respectively. dombi graph let ( , ) g g g v e be a crisp undirected graph contain no self-loop and parallel edges. also, let : [0,1]v  membership degree on v and : [0,1]v v   be the membership degree on the symmetric fuzzy relation e v v  . then ( , , )v   , is said to be a dombi graph if ( ) ( ) ( , ) ( ) ( ) ( ) ( ) a b a b a b a b           , ( )ab e  . picture dombi fuzzy graph (pdfg) let ( , ) g g g v e be a crisp undirected graph contain no self-loop and parallel edges. also, let ( , , )        s. t., : [0,1]v   , : [0,1]v   and : [0,1]v   be the positive membership degree, neutral membership degree and negative membership degree respectively on the pfs v . we consider ( , , )        s. t., : [0,1]v v    , : [0,1]v v    and : [0,1]v v    as the positive membership degree, neutral membership degree and negative membership degree respectively, in the symmetric pfr e v v  . then ( , , )v   , is said to be a pdfg if 1. ( ) ( ) ( ) ( ) ( ) ( ) ( ) a b ab a b a b                  , ( ) g ab e  ; 2. ( ) ( ) ( ) ( ) ( ) ( ) ( ) a b ab a b a b                  , ( ) g ab e  ; 3. ( ) ( ) 2 ( ) ( ) ( ) 1 ( ) ( ) a b a b ab a b                   , ( ) g ab e  . 3. some operation on pdfg’s union the union of two pdfg's ( , , )v   and ( , , )v   of the graphs ( , ) g g g v e  and ( , ) h h h v e  respectively, is denoted by  and is defined as ( , , ) g h v v       , where ( , , )                   and ( , , )                  s. t., ( )( )      ( ), if g h v v       ( ), if h g v v       a study on picture dombi fuzzy graph 123 ( ) ( ) , if ( ) ( ) ( ) ( ) g h v v                         ( )( )      ( ), if g h v v       ( ), if h g v v       ( ) ( ) , if . ( ) ( ) ( ) ( ) g h v v                         ( )( )      ( ), if g h v v       ( ), if h g v v       ( ) ( ) 2 ( ) ( ) , if 1 ( ) ( ) g h v v                          . ( )( )ab     ( ), if ( ) g h ab ab e e     ( ), if ( ) h g ab ab e e     ( ) ( ) , if ( ) ( ) ( ) ( ) g h ab ab e e ab ab ab ab                   ( )( )ab     ( ), if ( ) g h ab ab e e     ( ), if ( ) h g ab ab e e     ( ) ( ) , if ( ) ( ) ( ) ( ) ( ) g h ab ab ab e e ab ab ab ab                  ( )( )ab     ( ), if ( ) g h ab ab e e     ( ), if ( ) h g ab ab e e     ( ) ( ) 2 ( ) ( ) , if ( ) 1 ( ) ( ) g h ab ab ab ab ab e e ab ab                   . example 1: we consider two pdfg's ( , ) a a a    (shown in fig. 1(a) ) and ( , ) b b b    (shown in fig. 
1(b) )of the graphs ( , ) a a a v e  and ( , ) b b b v e  respectively, where { , , } a v x y z , { , , } a e xy yz zx , { , , } b v y z w and { , , } b e yz yw zw . then the union of a and b are shown in figure 1(c). mohanta et al./decis. mak. appl. manag. eng. 3 (2) (2020) 119-130 124 figure 1(a). pdfg a figure 1(b). pdfg b figure 1(c). pdfg a b join the join of two pdfg's ( , , )v   and ( , , )v   of the graphs ( , ) g g g v e  and ( , ) h h h v e  respectively, is denoted by  and is defined as ( , , , ) g h v v e       , where ( , , )                   , ( , , )                   , g h v v   , g h e e e e   (e  set of all edges joining the nodes of g v and ) h v s. t., ( )( ) ( )( ), if g h v v                ( )( ) ( )( ), if g h v v                ( )( ) ( )( ), if g h v v                ( )( )ab     ( )( ), if ( ) g h ab ab e e        ( ) ( ) , if ( ) ( ) ( ) ( ) ( ) a b ab e a b a b                 ( )( )ab     ( )( ), if ( ) g h ab ab e e        ( ) ( ) , if ( ) . ( ) ( ) ( ) ( ) a b ab e a b a b                 ( )( )ab     ( )( ), if ( ) g h ab ab e e        a study on picture dombi fuzzy graph 125 ( ) ( ) 2 ( ) ( ) , if ( ) 1 ( ) ( ) a b a b ab e a b                  theorem 1: the join of two pdfg's is a pdfg. composition the composition of two pdfg's ( , , )v   and ( , , )v   of the graphs ( , ) g g g v e  and ( , ) h h h v e  respectively, is denoted by and is defined as ( , , , ) g h v v e     , where ( , , )               , ( , , )               and 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 1 1 0 1 0 1 {(( , )( , )) : , ( ) } {(( , )( , )) : ( ) , } {(( , )( , )) : ( ) , } g h g h g e s t s t s v t t e s t s t s s e t v s t s t s s e t t          s. t., ( )( , )      ( ) ( ) ( ) ( ) ( ) ( )                      ( )( , )      ( ) ( ) ( ) ( ) ( ) ( )                      ( )( , )      ( ) ( ) 2 ( ) ( ) 1 ( ) ( )                       v  and ( , ) e   , ( ) ( ) ( )(( , )( , )) ( ) ( ) ( ) ( )                               ( ) ( ) ( )(( , )( , )) ( ) ( ) ( ) ( )                               ( ) ( ) 2 ( ) ( ) ( )(( , )( , )) 1 ( ) ( )                                v  and ( , ) e   , ( ) ( ) ( )(( , )( , )) ( ) ( ) ( ) ( )                               ( ) ( ) ( )(( , )( , )) ( ) ( ) ( ) ( )                               ( ) ( ) 2 ( ) ( ) ( )(( , )( , )) 1 ( ) ( )                                ( , ) g e   , and h v   , ( )(( , )( , ))          ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) 2 ( ) ( ) ( )                                               ( )(( , )( , ))         ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) 2 ( ) ( ) ( )                                                mohanta et al./decis. mak. appl. manag. eng. 
3 (2) (2020) 119-130 126 ( )(( , )( , ))         ( ) ( ) ( ) 2 ( ) ( ) 2 ( ) ( ) 2 ( ) ( ) 4 ( ) ( ) ( ) 1 ( ) ( ) ( ) ( ) ( ) ( ) 2 ( )                                                                                        example 2: we consider two pdfg's ( , ) a a a    and ( , ) b b b    of the graphs ( , ) a a a v e  and ( , ) b b b v e  respectively, where { , , } a v x y z , { , } a e xy yz , { , } b v a b and { } b e ab . then the composition of a and b are shown in fig. 2(c). figure 2(a). pdfg a figure 2(b). pdfg b figure 2(c). pdfg a b cartesian product the cartesian product of two pdfg's ( , , )v   and ( , , )v   of the graphs ( , ) g g g v e  and ( , ) h h h v e  respectively, is denoted by  and is defined as ( , , ) g h v v       , where ( , , )                   and ( , , )                   s. t., ( , ) v v    , ( ) ( ) ( )( , ) ( ) ( ) ( ) ( )                             ( ) ( ) ( )( , ) ( ) ( ) ( ) ( )                             ( ) ( ) 2 ( ) ( ) ( )( , ) 1 ( ) ( )                              v  and ( , ) e   , a study on picture dombi fuzzy graph 127 ( ) ( ) ( )(( , )( , )) ( ) ( ) ( ) ( )                               ( ) ( ) ( )(( , )( , )) ( ) ( ) ( ) ( )                               ( ) ( ) 2 ( ) ( ) ( )(( , )( , )) 1 ( ) ( )                                v  and ( , ) e   , ( ) ( ) ( )(( , )( , )) ( ) ( ) ( ) ( )                               ( ) ( ) ( )(( , )( , )) ( ) ( ) ( ) ( )                               ( ) ( ) 2 ( ) ( ) ( )(( , )( , )) 1 ( ) ( )                                ( , )( , ) ( )v v e       , ( )(( , )( , )) 0          , ( )(( , )( , )) 0          , ( )(( , )( , )) 0          . remark 1: the cartesian product of two pdfg's is not necessarily a dfg. complement of a pdfg let ( , , ) g v   be a pdfg of the graph ( , ) g g g v e . then the complement of is represented as ( , , ) c c c g v   and is defined as follows: c      , c      and c      ( ) c ab  ( ) ( ) , if ( ) 0 ( ) ( ) ( ) ( ) a b ab a b a b                   ( ) ( ) ( ), if 0 ( ) 1 ( ) ( ) ( ) ( ) a b ab ab a b a b                       ( ) c ab  ( ) ( ) ( ) , if ( ) 0 ( ) ( ) ( ) b b a ab a b a                   ( ) ( ) ( ) ( ), if 0 ( ) 1 ( ) ( ) ( ) b a b ab ab a b a                       ( ) c ab   ( ) ( ) 2 ( ) ( ) , if ( ) 0 1 ( ) ( ) a b a b ab a b                   ( ) ( ) 2 ( ) ( ) ( ) , if 0 ( ) 1 1 ( ) ( ) a b a b ab ab a b                        mohanta et al./decis. mak. appl. manag. eng. 3 (2) (2020) 119-130 128 example 3: we consider the pdfg ( , ) a a    of the graph ( , ) g g g v e  where { , , } g v x y z , { } g e yz . then complement c of shown in fig. 3(a) and fig. 3(b) respectively. figure 3(a). pdfg figure 3(b). c theorem 2: let ( , , ) g v   be a pdfg of the graph ( , ) g g g v e . then ( ) c c  . 
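as a small numerical illustration of the pdfg edge conditions stated above, the following sketch is not taken from the paper: the vertex and edge values are invented, and it assumes the convention that the positive and neutral edge degrees are bounded above by the gamma = 1 dombi t-norm while the negative edge degree is bounded below by the gamma = 1 dombi s-norm.

```python
# minimal sketch: checking the pdfg edge conditions with gamma = 1 dombi operators.
# vertex and edge memberships below are illustrative values, not taken from the paper.

def dombi_t(a, b):
    # dombi t-norm with gamma = 1
    return a * b / (a + b - a * b) if (a + b - a * b) else 0.0

def dombi_s(a, b):
    # dombi s-norm (t-conorm) with gamma = 1
    return (a + b - 2 * a * b) / (1 - a * b) if a * b != 1 else 1.0

# picture fuzzy vertex values (positive, neutral, negative), each triple summing to <= 1
sigma = {"x": (0.5, 0.2, 0.2), "y": (0.6, 0.1, 0.3)}

# assumed convention: positive/neutral edge degrees must not exceed the dombi t-norm
# of the corresponding vertex degrees; the negative edge degree must not fall below
# the dombi s-norm of the vertex negative degrees
mu_pos_max = dombi_t(sigma["x"][0], sigma["y"][0])   # 0.375
mu_neu_max = dombi_t(sigma["x"][1], sigma["y"][1])   # ~0.071
mu_neg_min = dombi_s(sigma["x"][2], sigma["y"][2])   # ~0.404

edge_xy = (0.35, 0.05, 0.45)                         # candidate picture fuzzy edge value
ok = (edge_xy[0] <= mu_pos_max and
      edge_xy[1] <= mu_neu_max and
      edge_xy[2] >= mu_neg_min)
print("edge xy satisfies the pdfg conditions:", ok)
```

applied to every edge, the same three checks decide whether a given picture fuzzy graph qualifies as a pdfg under the stated convention.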
homomorphism, isomorphism, weak isomorphism, co-weak isomorphism let us consider two pdfg's ( , , )v   and ( , , )v   of the graphs ( , ) g g g v e  and ( , ) h h h v e  , where ( , , )        , ( , , )        , ( , , )        and ( , , )        . (homomorphism) a mapping :  is said to be a homomorphism, if g v  ( ) ( ( ))        , ( ) ( ( ))        and ( ) ( ( ))        ; ( ) g ab e  ( ) ( ( ))ab ab      , ( ) ( ( ))ab ab      and ( ) ( ( ))ab ab      . (isomorphism) a mapping :  is said to be an isomorphism, if g v  ( ) ( ( ))        , ( ) ( ( ))        and ( ) ( ( ))        ; ( ) g ab e  ( ) ( ( ))ab ab      , ( ) ( ( ))ab ab      and ( ) ( ( ))ab ab      . if and are isomorphism, then we write  . (weak-isomorphism) a mapping :  is said to be a weak isomorphism, if  homomorphism; g v  ( ) ( ( ))        , ( ) ( ( ))        and ( ) ( ( ))        . (co-weak isomorphism) a mapping :  is said to be a co-weak isomorphism, if  is a homomorphism; ( ) g ab e  ( ) ( ( ))ab ab      , ( ) ( ( ))ab ab      and ( ) ( ( ))ab ab      . self-complementary let ( , , ) g v   be a pdfg of the graph ( , ) g g g v e . then is said to be selfcomplementary if c . a study on picture dombi fuzzy graph 129 theorem 3: let ( , , ) g v   be a self-complementary pdfg of the graph ( , ) g g g v e . then 0 0 0 0 0 0 0 0 0 0 0 0 ( ) ( )1 ( ) , 2 ( ) ( ) ( ) ( )s t s t s t s t s t s t                      0 0 0 0 0 0 0 0 0 0 0 0 ( ) ( )1 ( ) 2 ( ) ( ) ( ) ( )s t s t s t s t s t s t                      0 0 0 0 0 0 0 0 0 0 0 0 ( ) ( ) 2 ( ) ( )1 ( ) . 2 1 ( ) ( )s t s t s t s t s t s t                       proof: let be a self-complementary graph. so,  an isomorphism : c   s. t., g v  ( ) ( ( )) c         , ( ) ( ( )) c         ( ) g ab e  , ( ) ( ( )) c ab ab      , ( ) ( ( )) c ab ab      and ( ) ( ( )) c ab ab      . now, we know that, ( ( )) ( ( )) ( ( ) ( )) ( ( ) ( )) ( ( )) ( ( )) ( ( )) ( ( )) c c c c c c c a b a b a b a b a b                               or, ( ) ( ) ( )) ( ( ) ( )) ( ) ( ) ( ) ( ) a b ab a b a b a b                       or, ( ) ( ) ( )) ( ( ) ( )) ( ) ( ) ( ) ( )a b a b a b a b ab a b a b a b                             or, ( ) ( ) 2 ( )) ( ) ( ) ( ) ( )a b a b a b ab a b a b                      or, ( ) ( )1 ( )) 2 ( ) ( ) ( ) ( )a b a b a b ab a b a b                      . in similar way we can proof the remaining two results. this completes the proof. 4. conclusion in this paper, we have introduced the new concept of picture dombi fuzzy graph. we have proposed some operators of union, join, composition and cartesian product of any two dombi picture fuzzy graphs and investigate many interesting properties of dombi picture fuzzy graph. finally, we define the complement picture dombi fuzzy graph and the isomorphic properties on it. the concept of picture dombi fuzzy graphs can be used to model in several areas of expert systems, transportation, artificial neural networks, pattern recognition and computer networks. author contributions: each author has participated and contributed sufficiently to take public responsibility for appropriate portions of the content. 
funding: this research received no external funding. conflicts of interest: the authors declare no conflicts of interest. mohanta et al./decis. mak. appl. manag. eng. 3 (2) (2020) 119-130 130 references alsina, c., trillas, e., & valverde, l. (1983). on some logical connectives for fuzzy sets theory. journal of mathematical analysis and applications. https://doi.org/10.1016/0022-247x(83)90216-0 atanassov, k. t. (1986). intuitionistic fuzzy sets. fuzzy sets and systems. https://doi.org/10.1016/s0165-0114(86)80034-3 cuong, b. c., & kreinovich, v. (2014). picture fuzzy sets a new concept for computational intelligence problems. 2013 3rd world congress on information and communication technologies, wict 2013. https://doi.org/10.1109/wict.2013.7113099 dogra, s. (2015). different types of product of fuzzy graphs. progress in nonlinear dynamics and chaos, 3(1), 41–56. dubois, d., ostasiewicz, w., & prade, h. (2000). fuzzy sets: history and basic notions. https://doi.org/10.1007/978-1-4615-4429-6_2 hamacher, h. (1978). uber logische verknupfungen unscharfer aussagen und deren zugehörige bewertungsfunktionen. progress in cybernetics and systems research, 3. havare, özge çolakoğlu, and h. m. (n.d.). on corona product of two fuzzy graphs. kaufmann, a. (1975). introduction à la théorie des sous-ensembles flous à l’usage des ingénieurs (fuzzy sets theory). menger, k. (1942). statistical metrics. proceedings of the national academy of sciences of the united states of america, 28(12), 535. rosenfeld, a. (1975). fuzzy graphs††the support of the office of computing activities, national science foundation, under grant gj-32258x, is gratefully acknowledged, as is the help of shelly rowe in preparing this paper. in fuzzy sets and their applications to cognitive and decision processes. https://doi.org/10.1016/b9780-12-775260-0.50008-6 schweizer, berthold, and a. s. (2011). probabilistic metric spaces. courier corporation. zadeh, l. a. (1965). fuzzy sets. information and control. https://doi.org/10.1016/s0019-9958(65)90241-x © 2018 by the authors. submitted for possible open access publication under the terms and conditions of the creative commons attribution (cc by) license (http://creativecommons.org/licenses/by/4.0/). plane thermoelastic waves in infinite half-space caused decision making: applications in management and engineering vol. 4, issue 2, 2021, pp. 126-139. issn: 2560-6018 eissn: 2620-0104 doi: https://doi.org/10.31181/dmame210402126g *corresponding author. e-mail addresses: ghosal.suman987@gmail.com (s.ghosal), swatidey@yahoo.com (s.dey), ppc@metal.becs.ac.in (p.p.chattopadhyay), shu.datt@gmail.com (s.datta), pb_etc_besu@yahoo.com (p. bhattacharyya) designing optimized ternary catalytic alloy electrode for efficiency improvement of semiconductor gas sensors using a machine learning approach suman ghosal1*, swati dey2, partha pratim chattopadhyay3, shubhabrata datta4 and partha bhattacharyya1 1 department of electronics and telecommunication engineering, indian institute of engineering science and technology, shibpur, west bengal, india. 2 department of aerospace engineering and applied mechanics, indian institute of engineering science and technology, shibpur, west bengal, india. 3 national institute of foundry and forge technology, hatia, ranchi 834003, india. 4 department of mechanical engineering, srm institute of science and technology, kattankulathur, india. received: 19 december 2020; accepted: 17 may 2021; available online: 13 june 2021. 
original scientific paper abstract: catalytic noble metal (s) or its alloy (s) has long been used as the electrode material to enhance the sensing performance of the semiconducting oxide-based gas sensors. in the present paper, optimized ternary metal alloy electrode has been designed, while the database is in pure or binary alloy compositions, using a machine learning methodology is reported for detection of ch4 gas as a test case. pure noble metals or their binary alloys as the electrode on the semiconducting zno sensing layer were investigated by the earlier researchers to enhance the sensitivity towards ch4. based on those research findings, an artificial neural network (ann) model was developed considering the three main features of the gas sensor devices, viz. response magnitude, response time and recovery time as a function of zno particle size and the composition of the catalytic alloy. a novel methodology was introduced by using ann models considered for optimized ternary alloy with enriched presentation through the multi-objective genetic algorithm (ga) wherever the generated pareto front was used. the prescriptive data analytics methodology seems to offer more or less convinced evidence for future experimental studies. designing optimized ternary catalytic alloy electrode for efficiency improvement of semiconductor gas sensors using a machine learning approach 127 keywords: oxide based gas sensor, ternary alloy catalyst design, sensing parameters, artificial neural network, genetic algorithm, multi-objective optimization. 1. introduction noble metals like palladium, platinum, silver, gold etc. were investigated for a long time due to their contribution towards improving the performance of semiconductor gas sensor devices (acharyya et al., 2016). it was found that the catalytic activities of these metals can further be reinforced by judiciously alloying it with a secondary metal (acharyya & bhattacharyya, 2016; roy et al., 2012). such binary alloys, often utilized in the form of electrode, act as the potential adsorption size for the target gas species, either through chemical sensitization or through electronic sensitization (roy et al., 2012). they help in lowering down the activation energy requirement for gas dissociation. due to subsequent spill-over effect, the surface activity of the main sensing layer (i.e., semiconducting oxide) is also significantly enhanced, which often leads to lower operating temperature, better sensitivity and faster response/recovery kinetics of the device (acharyya & bhattacharyya, 2016; roy et al., 2012; quaranta et al., 1999; bhattacharyya et al., 2007; bhattacharyya et al., 2015). moreover, in some cases, the alloying element often retards the degradation of the catalytic metal electrodes, thereby improving the long-term stability of the device dramatically (roy et al., 2012; quaranta et al., 1999; bhattacharyya et al., 2007; bhattacharyya et al., 2015; wollenstein et al., 2003; lee et al., 2003). for example, pd electrode on zno sensing layer was found to offer improved sensitivity, but at the cost of poor stability towards ch4 (bhattacharyya et al., 2008; bhattacharyya et al., 2007; basu et al., 2008). as long-term exposure to ch4 often leads to formation of palladium hydrate which, due to lattice mismatch with zno, degrades the stability of the sensor (particularly at high temperature) (bhattacharyya et al., 2007). 
further studies revealed that if pd is alloyed with 25-30% of ag, the probability of such hydrate formation is reduced significantly, which leads to better stability of the metal semiconductor junction and the device as a whole (maity et al., 2018). however, not a large variety of such binary alloys on different oxide surface has so far been investigated (bhattacharyya et al., 2008; basu et al., 2008; mishra et al., 2010; bhattacharyya et al., 2008; ghosal et al., 2019). moreover, most of the approaches are based on trial-and-error method which is time consuming, expensive, and even without any guarantee of success. to improve the performance of such catalytic electrode material, computational design of the alloy, before experimentation, is of immense importance to avoid the above limitations. the non-availability of constitutive models for complex materials systems has prompted researchers to rely on data-driven design approaches (datta & chattopadhyay, 2013). however, the artificial intelligence (ai) and machine learning (ml) have been found to be effective tools for the purpose of designing the alloys, using the experimental findings published by the earlier researchers (datta & chattopadhyay, 2013). kumar et al. (2011) used ml techniques on the raw data attained from four different odours/gases, responses of an oxygen plasma treated thick film tin oxide sensor array. pławiak and rzecki (2015) employed similar methods to study the effect of gas concentration on the performance of a sensor. in an earlier work, by the present authors (ghosal et al., 2019) aimed at oxide-based gas sensor to sense methane gas competently, ai based methodology was incorporated ghosal et al./decis. mak. appl. manag. eng. 4 (2) (2021) 126-139 128 magnificently to design ternary catalytic alloy systematically as per the data set of pure or binary alloys. this was the first attempt to design ternary electrode materials using ai. however, in that work, the ternary alloys were designed without any constraint in the combinations (weight percentage) of elements in the alloys or the amount of each element in it. on the contrary, in the present work, a noble approach has been employed using ai techniques to design ternary alloy catalysts with improved performance, where the experience of the earlier researchers in selection of elements and the maximum and the minimum allowable limit on the amount of a particular alloy element was incorporated in the database through restructuring the data, and thus incorporating the system knowledge in the models (the method of data restructuring is explained in database section). the objectives of the present work are to improve the three pivotal performance parameters of the gas sensor device, viz. response magnitude, response time and recovery time, simultaneously. as the objectives (performance parameters of the sensors) are repeatedly contradictory in nature, multi-objective optimization using genetic algorithm (ga) has been employed for designing alloys with conflicting requirement (deb, 2001; goldberg, 2002; dey et al., 2016; datta, 2016). as we have already mentioned above, a model through the data made by the past researchers on binary alloys was established for three different aspects through artificial neural network (ann) (kumar, 2004; anderson, 1995). ann has proven its capability to plot the input-output relationship of compound materials systems (longo et al., 2017; ray et al., 2009) aimed at optimization process. the ann models are used as the objective functions. 
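the sentence above can be made concrete with a small, purely hypothetical wrapper (the function and model names are illustrative assumptions, not the authors' code): each trained ann surrogate supplies one component of the objective vector handed to the multi-objective ga, with the maximized response magnitude negated so that a minimizing solver treats all three objectives uniformly.

```python
# hypothetical sketch: trained surrogate models used as the objective vector of a
# multi-objective search. names, signatures and the negation convention are
# illustrative assumptions, not the authors' implementation.

def objectives(x, models):
    """x: candidate design (alloy composition, zno particle size, temperature, ch4 conc.)
    models: dict of trained predictors for the three sensor outputs."""
    rm  = models["response_magnitude"](x)   # to be maximized
    rst = models["response_time"](x)        # to be minimized
    rct = models["recovery_time"](x)        # to be minimized
    return (-rm, rst, rct)                  # negate rm so every objective is minimized
```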
the ann models as objective functions while using for ga based optimizations techniques in materials structures had been fruitfully designed by the earlier researchers (datta & chattopadhyay, 2013; sinha et al., 2013). while developing the pareto fronts (goldberg, 2002) as a consequence in the optimization procedure comprising non-dominated solutions was established for identifying the nature of combinations and structures to design the optimized alloy with potential to improve the gas sensor performance in a predetermined and tailor-made fashion. 2. problem formulation the traditional trial and error methodology of designing new materials particularly, binary or ternary complex, with improved performance is a timewasting process, which might often lead to a whole sewerage of materials. the concept of designing computational materials using intelligent data analytics techniques can search solutions computationally, which can later be experimentally validated. in the present case, three attributes, viz. response magnitude, response time and recovery time, were used via the measure of performance of the gas sensor. these attributes were described as the compositional utilities of the catalytic metal electrode, particle size of zno, ch4 concentrations and the optimum temperature of sensing (quaranta et al., 1999). the designed catalyst alloy was targeted to overcome the issues related to the slow response and recovery kinetics, and not higher response magnitude. due to this, for enhancing the device performance can be defined as dropping the response time and recovery time and increasing the response magnitude simultaneously. as revealed from the earlier reports, these objects might often be contradictory and that is why multi-objective optimization through the ga was engaged in the present approach. for describing the overhead three aspects, three distinct ann models were developed, were recycled, and used as objective functions for studying the optimization techniques. as multiobjective complications do not lead to single optimum solution, non-dominant solutions were achieved, proposing designing optimized ternary catalytic alloy electrode for efficiency improvement of semiconductor gas sensors using a machine learning approach 129 the best suitable cooperation between the objectives, known as pareto front (dey et al., 2015). these solutions can be analyzed and utilized to design optimized ternary alloy electrode, suitable for enhancing the performance of semiconducting oxidebased gas sensor devices. 3. construction of database as mentioned earlier, the database was generated from experimental results published by the earlier researchers (bhattacharyya et al., 2008; basu et al., 2008; mishra et al., 2010; bhattacharyya et al., 2008; ghosal et al., 2019). primary analysis of the database revealed that the noble metals used as catalytic electrode can be put into three clusters, pure metals, binary alloys with  50-50 ratio of the elements and binary alloys with  75-25 ratio of elements. table 1. the list of inputs parameters for three different output variables with their minimum, maximum, average and standard deviation (bhattacharyya et al., 2008; basu et al., 2008; mishra et al., 2010; bhattacharyya et al., 2008; ghosal et al., 2019). 
input variables              min      max      mean        standard deviation
ch4 conc. (%)                0.01     1.5      0.524754    0.426081395
temperature (oc)             100      350      225.4098    64.54517205
zno particle size (nm)       20       60       57.72131    15.47054314
pt (wt%)                     0        100      8.196721    27.65912729
pd (wt%)                     0        74       34.42623    31.57660474
rh (wt%)                     0        100      22.95082    42.40063924
ag (wt%)                     0        70       27.86885    27.30779828
au (wt%)                     0        100      6.557377    24.95898275
output variables             min      max      mean        standard deviation
response magnitude (%)       20       83.6     45.93888    18.74628351
response time (s)            2.69     86       36.11855    18.91098114
recovery time (s)            16       102      55.48555    22.39324979

aimed at designing the ternary alloys, the database was derived by dividing the compositional parameters (or components) into three different components, based on the weight percent (%) of pt, pd, rh, au and ag when used in pure or binary form as the electrode in the device, taking the clue from the experience of alloy development of the earlier researchers (bhattacharyya et al., 2008; basu et al., 2008; mishra et al., 2010; bhattacharyya et al., 2008). among the three components, component 1 consists of the elements having more than 50 weight% in the alloy, including the pure metals; component 2 consists of the metals present within the range of 30 to 50 weight%; and component 3 contains the elements with less than 30 weight% in the alloy. this restructuring of the database was carried out to divide the elements into three groups, as per the stoichiometry maintained by the earlier researchers. such division of the alloying elements into three components made the database suitable for designing ternary alloy(s). after the division of the elements, it was observed that for designing ternary alloys (as per their atomic number) three probable arrangements may be made, i.e. pt-pd-ag, au-pd-ag and rh-pd-ag. in this way, the basic components of the three alloy systems could be finalized using the prior understanding of the alloys used for the purpose. in addition to the above, three further components, the zno particle size, the optimum temperature for sensing and the concentration of methane (ch4), were also included as input parameters. the three output parameters are response magnitude, response time and recovery time, the performance indicators of a gas sensor device.

4. computational techniques

the processing unit of an artificial neural network (ann) bears a resemblance to that of a human brain. inputs generated from other processing units are accepted by the multiple layered processing units that constitute the network architecture. the final output of the scheme is generated on completion of the processing in the different units. anns provide a sound and innovative approach to the processing and modelling of materials. the most important characteristic of an ann is that it can recognize the pattern between inputs and outputs from earlier examples, and even from the present data, without any previous assumption about their characteristics and interconnections. compared with traditional approaches, the network has the capacity to capture more difficult relations in materials property data. the network contains a number of inputs (experimental variables), a single output, and an intermediate hidden layer. at each hidden unit $h_i$, the weighted combination of the standardized inputs $x_j^{n}$ is passed through a hyperbolic tangent transfer function, which is presented in eq.
(1), which ensures that each input contributes to each and every hidden unit:

$$h_i=\tanh\!\left(\sum_{j} w_{ij}^{(1)} x_j^{n}+\theta_i^{(1)}\right) \qquad (1)$$

the output neuron then computes a linear weighted sum of the outputs of the hidden units, as indicated in eq. (2):

$$y=\sum_{i} w_i^{(2)} h_i+\theta \qquad (2)$$

in both equations, $y$ is the output, $x_j^{n}$ are the normalized inputs, $h_i$ are the outputs of the hidden units, $w_{ij}^{(1)}$ and $w_i^{(2)}$ are weights, and $\theta_i^{(1)}$ and $\theta$ are biases. different outputs can therefore be obtained by changing the weights in equations (1) and (2). the optimal values of these weights are obtained by "training" the network on a set of normalized input–output data. for that, the input–output data are first standardized to the range −1 to +1 using eq. (3):

$$x_j^{n}=\frac{2\,(x_j-x_{\min})}{x_{\max}-x_{\min}}-1 \qquad (3)$$

here $x_j^{n}$ denotes the normalized value, $x_j$ the input or output variable, and $x_{\min}$ and $x_{\max}$ the minimum and the maximum values of the variable, respectively. the network is trained by adjusting the weights $w_{ij}$ to minimize an error function, which is basically a regularized sum of square errors. this ultimately leads to an optimal description of the input–output relationship.

the back propagation (bp) algorithm is intended for the determination of the weights and biases of a multilayer ann using feed-forward connections from the input layer to the hidden layers and then to the output layer. it is an iterative gradient descent algorithm designed to minimize the mean square error between the predicted output and the desired output. the scaled conjugate gradient back propagation algorithm was used in the present work.

table 2. parameters used in the multi-objective genetic algorithm
crossover probability         0.95
random seed value             0.19
mutation probability          0.05
parental selection strategy   tournament selection

figure 1. representative scatter plots showing the prediction by the trained ann models for (a) response magnitude, (b) response time and (c) recovery time, for the three alloy combinations under consideration.

figure 2. sensitivity studies of the independent variables for all three output parameters, for (a) au-pd-ag, (b) pt-pd-ag and (c) rh-pd-ag alloys.

because an artificial neural network (ann) is a highly complex and non-linear model, the effect of the input parameters on the final outputs cannot be determined by inspection. this makes it imperative to conduct a sensitivity study to reveal the complex hidden connections in the ann model. sensitivity analysis reveals the gross relative importance of the parameters on the outputs. researchers have used several approaches to sensitivity study; for this work, the connection weight method was selected (dolden, 2004). to compute the sensitivity, the weights of the input–hidden and hidden–output connections in the trained ann models were used. the philosophy of evolution of species promoted by charles darwin has inspired the development of the unconventional optimization technique known as the genetic algorithm (ga).
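as an illustration of eqs. (1)–(3) above, the minimal sketch below is not the authors' code: the weights, biases and raw input values are made up. it normalizes three raw inputs to [-1, 1] and evaluates a one-hidden-layer tanh network; in the actual study the weights are instead fitted with the scaled conjugate gradient back-propagation algorithm.

```python
import numpy as np

# illustrative sketch of eqs. (1)-(3): min-max normalization to [-1, 1] followed by a
# one-hidden-layer tanh network. all weights, biases and input values are made up.

def normalize(x, x_min, x_max):
    # eq. (3): scale a raw variable into [-1, 1]
    return 2.0 * (x - x_min) / (x_max - x_min) - 1.0

x_raw  = np.array([0.5, 250.0, 45.0])      # e.g. ch4 conc., temperature, particle size
x_min  = np.array([0.01, 100.0, 20.0])
x_max  = np.array([1.5, 350.0, 60.0])
x_norm = normalize(x_raw, x_min, x_max)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))    # input-to-hidden weights w_ij
b1 = rng.normal(size=4)         # hidden biases theta_i
w2 = rng.normal(size=4)         # hidden-to-output weights w_i
b2 = 0.1                        # output bias theta

h = np.tanh(W1 @ x_norm + b1)   # eq. (1): hidden-unit outputs
y = w2 @ h + b2                 # eq. (2): linear output neuron
print(x_norm, y)
```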
selection, crossover and mutation are the main biological principles which are followed via the simple genetic algorithm (sga) (pławiak & rzecki, 2015; singh et al., 2020). in case of selection process, the candidates aimed at next generation which were recognized. through the crossover process via exchanging and transmission of genetic statistics among two parents for the birth of offspring is transported. finally, a small, probabilistic variation in the genetic makeup is made by the mutation process. in another circumstance, one of the objective conflicting in character, then method is termed as multi-objective optimization (moo) (malbašić & đurić, 2020). unlike single objective optimization where a single prime solution evolves, in circumstance of moo, a non-dominated set of solutions, termed as the ‘pareto front’, evolve (dey et al., 2015; milosevic et al., 2021). this enables a decision producer to select the utmost suitable solution out of the numerous alike finest solutions trading off amongst the differing objectives. in the present work, the developed ann models which are describing the three features of the sensor system are recycled as the objective functions and non-dominated sorting genetic algorithm (nsga-ii) code (deb et al., 2002; albu et al., 2019; gharib, 2020; messinis & vosniakos, 2020), while using for the multi-objective optimization. the constraints used for optimization are specified in table 2. 5. results and discussion figure1 displays the performances of some of the developed artificial neural network (ann) models for predicting response magnitude, response time and recovery time for all the three alloy combinations under consideration. the target versus achieved output plots for the ann models show that the performance of most of the ann models are satisfactory. but in some cases, particularly for the response magnitude models, the prediction by the ann models are not as expected. the sensitivity analyses of the variables based on the trained ann models are shown in fig. 2. it is seen that in the pd-ag-au alloy, au has the most significant role in reducing the recovery time. the role of the other alloying elements is not that designing optimized ternary catalytic alloy electrode for efficiency improvement of semiconductor gas sensors using a machine learning approach 133 significant in improving the output parameters of the sensor device. while, the other two ternary alloy systems, pt in pt-pd-ag alloy and pd and rh in pd-rh-au alloy seem to have positive effect in increasing the response magnitude. the multi-objective optimization using genetic algorithm generates non-dominated solutions in the form of ‘pareto front’. the pareto solutions are generated with different combinations of objectives, viz. (i) response magnitude and response time, (ii) response magnitude and recovery time, and (iii) response magnitude, response time and recovery time for designing all three alloy systems for three different zno particle sizes separately. different combinations of objective functions are used for the multi-objective optimization processes to study the role of the variables in the optimum solutions for different conflicting situations. the results from the optimizations taking three objectives together provide the solutions which can be processed for further investigations, even experimental trials, as those consider all the factors required for the improvement of the catalytic alloy performance. 
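to make the idea of a pareto front concrete, the sketch below uses invented candidate objective values; in the paper the objective values come from the trained ann models and the search itself is performed by nsga-ii. it simply filters a small candidate set down to the non-dominated solutions when response magnitude is maximized and the two times are minimized.

```python
import numpy as np

# illustrative pareto filter: keep the non-dominated candidates when response
# magnitude is maximized and response/recovery times are minimized. the candidate
# values are made up; they do not come from the study.

def is_dominated(f_i, f_j):
    # f = (response magnitude, response time, recovery time);
    # j dominates i if it is no worse in every objective and strictly better in one
    no_worse = (f_j[0] >= f_i[0]) and (f_j[1] <= f_i[1]) and (f_j[2] <= f_i[2])
    better   = (f_j[0] >  f_i[0]) or  (f_j[1] <  f_i[1]) or  (f_j[2] <  f_i[2])
    return no_worse and better

candidates = np.array([
    [46.0, 70.0,  80.0],
    [50.0, 75.0,  95.0],
    [44.0, 72.0,  78.0],
    [48.0, 90.0, 110.0],
])

pareto = [f for i, f in enumerate(candidates)
          if not any(is_dominated(f, g) for j, g in enumerate(candidates) if j != i)]
print(np.array(pareto))   # the last candidate is dominated and dropped
```

the surviving rows trade higher response magnitude against longer response and recovery times, which is exactly the kind of trade-off the pareto fronts in figs. 3 and 4 display.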
some representative results are selected for all three alloy systems and shown as fig. 3 and fig. 4 and tables 3-5. (a) (b) (c) (d) (e) figure 3. the multi-objective optimization results for the au-pd-ag alloy (a) pareto front for response magnitude and response time, b) pareto front for response magnitude and recovery time, (c) pareto front for response magnitude, response time and recovery time, (d) variation of pd in the pareto solution with increasing response magnitude, and (e) variation of ag in the pareto solution with increasing response magnitude for different search processes. fig. 3(a-c) shows the pareto fronts of au-pd-ag alloy system generated from multi-objective optimization of different combinations of objectives for zno particle size of 45 nm. the conflicting nature of the two objectives are clearly visible when response magnitude is maximized with simultaneous minimization of recovery time 60 70 80 90 100 110 120 38 40 42 44 46 48 50 r e s p o n s e m a g n it u d e , % recovery time, s ghosal et al./decis. mak. appl. manag. eng. 4 (2) (2021) 126-139 134 (fig. 3b). the conflict between the objectives of maximizing response magnitude with minimization of response time is not that significant, as evident from the spread of the pareto front (fig. 3a). the same phenomenon can be observed in the pareto surface developed using all three objectives together (fig. 3c). the non-dominated solutions for all three cases of optimization are arranged along with increasing response magnitude and numbered. the maximum and minimum values of the amounts of elements used in the ternary alloys with optimized performance, as shown in the pareto solutions for all three optimization conditions, are described in table 3. it is seen that the au content does not vary significantly for any optimization condition and remains close to 50 wt%. the other two elements, i.e. pd and ag, have varied to a certain extent during tradeoff between the objectives. the variation in pd and ag for the bi-objective optimization of response magnitude and response time are lesser than the other two cases. this is in conformance of the finding of the pareto front (fig. 3a). the variations of the pd and ag in the solutions for three different optimization conditions are plotted in figs. 3(d-e), where increasing number of pareto solution depicts increase in response magnitude. table 3. the ranges of the weight percentages of the elements in the optimum ternary (au-pd-ag) alloys in the pareto solutions of three different optimization process. objectives au (wt%) pd (wt%) ag (wt%) minimum maximum minimum maximum minimum maximum rm –res.t 50.93941 53.87713 33.33568 37.9842 10.20336 15.14193 r.m –rec.t 50 50.53885 31.07032 45.603958 4.375566 18.855211 rm –res.t-rec. t 50.00015 52.65257 30.72845 44.444019 5.47625 18.394596 table 4. the ranges of the weight percentages of the elements in the optimum ternary (pt-pd-ag) alloys in the pareto solutions of three different optimization process. objectives pt (wt%) pd (wt%) ag (wt%) minimum maximum minimum maximum minimum maximum rm –res.t 50 50.00064 30 30.000008 20.09934 20.099768 r.m –rec.t 50.00336 50.00866 30 30.000065 20.09123 20.091375 rm –res.t-rec. t 50.0002 50.10892 30 30.000731 19.98956 20.09416 it is clear from the figure that increase in pd with simultaneous decease in ag increase the response magnitude, with the expense of increase in recovery and response times. 
the trends of disparity of ag and pd in the pareto solution are same in the bi-objective optimization response magnitude and recovery time and the triobjective optimization for all objectives. this gives clear indication that to achieve a certain performance level, i.e. to achieve a specific tradeoff between the three properties, the user can get the required alloy composition easily from those to pareto solutions. the other pareto solution, developed from optimization of response magnitude and response time, may provide a different solution to some extent. the minimum and the maximum values of the alloying elements in the pareto solutions of designing optimized ternary catalytic alloy electrode for efficiency improvement of semiconductor gas sensors using a machine learning approach 135 pt-pd-ag alloy system for zno particle size of 60 nm are given in table 4. the values clearly indicate that the optimum weight percentages for the three elements are almost same throughout the pareto front, and even for all three types of multiobjective optimizations (a) (b) (c) (d) (e) figure 4. the multi-objective optimization results for the rh-pd-ag alloy (a) pareto front for response magnitude and response time, b) pareto front for response magnitude and recovery time, (c) pareto front for response magnitude, response time and recovery time, (d) variation of pd in the pareto solution with increasing response magnitude, and (e) variation of ag in the pareto solution with increasing response magnitude for different search processes. for all cases a pt0.5pd0.3ag0.2 alloy provides the best performance. this leads to the fact that the three objectives do not have any significant conflict between them, at least within the used search space. some of the important results coming out from the multi-objective optimizations for the rh-pd-ag alloy system for zno particle size 30 nm are shown in fig.4. the pareto fronts of the alloy system generated from three different optimization processes, as before, are given in fig. 4(a-c). table 5. the ranges of the weight percentages of the elements in the optimum ternary (rh-pd-ag) alloys in the pareto solutions of three different optimization process objectives rh (wt%) pd (wt%) ag (wt%) minimum maximum minimum maximum minimum maximum rm –res.t 50 50 36.32003 46.794388 3.304113 13.77707 r.m –rec.t 50 51.16267 32.45868 44.862404 5.232672 17.638802 rm –res.t-rec. t 50 52.85969 33.32817 44.906982 5.137318 15.753006 ghosal et al./decis. mak. appl. manag. eng. 4 (2) (2021) 126-139 136 it is seen in this case that the variations of the properties are quite high in the nondominated solution, which means the objectives are quite conflicting in this alloy. table 5 gives the ranges of the alloying elements required to achieve optimum solutions for improved performance, as shown in the pareto solutions of the three optimizations. here it is seen again that the major element of this alloy (rh) has not varied for all the solutions and remained at around 50 wt%, same as previous two alloy systems. when the variations of other two elements in the solutions are plotted, it is seen that pd content decreases and ag increases with increasing in response magnitude (fig. 4(d-e)). this is contrary to the trend shown in au-pd-ag alloy. the amount of pd for different trade off options the pareto front shows almost similar values in case of optimization of response magnitude and recovery time as well as in case where all three objectives are considered together. 
but the solutions from the optimization of response magnitude and response time show higher values of pd. in case of addition of ag as the third element, the trend is similar for the two cases as above, whereas the third optimization expectedly shows lower values of ag. the results of the data-driven modelling using ann and multi-objective optimization using ga for designing three alloys, viz. au-pd-ag, pt-pd-ag and rh-ptag, have shown some clear trends in most of the cases. in the earlier work (ghosal et al., 2019), the ga based searching for ternary alloys with improved performance was completely random in nature, which was necessary to gather primary idea about the probable set of ternary alloys. but in this work, a systematic search has been carried out, where the prior knowledge regarding the role of the various elements has been incorporated meticulously in the searching process. in the process the three alloy systems have one base element each (au, pt and rh), one major alloying element (pd) and one addition of comparatively lower amount (ag). this makes the variations in compositions more perceptible, compared to the randomness in variation of composition previously, and thus the decision-making process for experimental trial becomes more logical. during the experimental validation, one should remember that all the results reported here based on data-driven models generated from data collected from secondary source, and hence it is better to consider the general trends of the findings, not a particular solution. 6. conclusion catalytic noble metals in the form of electrode element were found to improve the semiconductor gas sensor device performance. earlier experimental findings revealed that alloy of such noble metals improved the sensing parameters, viz. response magnitude, response time, recovery time, operating temperature and selectivity. however, optimizing all these parameters simultaneously is a crucial challenge, as the requirement for such individual optimization is often mutually conflicting. in the present paper, a systematic approach based on ann and ga was reported to design an optimized ternary alloy electrode from the constructed database of the earlier reported binary ones. it was found with such ga based multiobjective optimization, the gas sensor device performance can be judiciously optimized employing the properly designed ternary alloy, as the electrode material. it was found that, pt-pd-ag (wt % ratio of pt: 50%, pd: 30 %, ag: 20%) offered most promising results for output parameters among the lot. the experimental verification for the present theoretical simulation has been considered as future work. designing optimized ternary catalytic alloy electrode for efficiency improvement of semiconductor gas sensors using a machine learning approach 137 acknowledgement: this publication is a product of the research and development work commenced in the project under the visvesvaraya ph.d. scheme of ministry of electronics and information technology, government of india, being instigated by digital india corporation (formerly media lab asia). author contributions: each author has participated and contributed sufficiently to take public responsibility for appropriate portions of the content. funding: this research received no external funding. conflicts of interest: the authors declare no conflicts of interest. references acharyya, d. & bhattacharyya, p. (2016). alcohol sensing performance of zno hexagonal nanotubes at low temperatures: a qualitative understanding. 
sens. actuators b, chem. 228, 373–386, doi: 10.1016/j.snb.2016.01.035. acharyya, d., huang, k. y. , chattopadhyay, p. p., ho, h. m. s., fecht j. & bhattacharyya, p. (2016). hybrid 3d structures of zno nanoflowers and pdo nanoparticles as a highly selective methanol sensor. analyst, 141, 2977–2989, doi: 10.1039/ c6an00326e. albu, a., precup, r.e., teban, t.a. (2019). results and challenges of artificial neural networks used for decision-making and control in medical applications. facta universitatis series: mechanical engineering, 17(3), 285 – 308. anderson, j. a. (1995). an introduction to neural networks. mit press, cambridge ma. basu, p. k., bhattacharyya, p., saha, n., saha, h. & basu s. (2008). the superior performance of the electrochemically grown zno thin films as methane sensor. sensors and actuators b, 133 (2), 357–363. bhattacharyya, p., basu, p. k., lang, c., saha, h. & basu s. (2008). noble metal catalytic contacts to sol–gel nanocrystalline zinc oxide thin films for sensing methane. sensors and actuators b, 129 (2), 551–557. bhattacharyya, p., basu, p. k., lang, c., saha, h. & basu, s. (2008). noble metal catalytic contacts to sol gel nanocrystalline zinc oxide thin films for sensing methane. sensors and actuators b, 129 (2), 551–557. bhattacharyya, p., basu, p. k., mukherjee, n., mondal, a., saha, h. & basu s. (2007). deposition of nanocrystalline zno thin films on p-si by novel galvanic method and application of the heterojunction as methane sensor. journal of materials science: materials in electronics, 18 (8), 823–829. bhattacharyya, p., basu, p. k., saha h. & basu, s. (2007). fast response methane sensor using nanocrystalline zinc oxide thin films derived by sol–gel method. sensors and actuators b, 124 (1), 62–67. ghosal et al./decis. mak. appl. manag. eng. 4 (2) (2021) 126-139 138 bhattacharyya, p., bhowmik, b. & fecht, h. j. (2015). operating temperature, repeatability, and selectivity of tio2 nanotube-based acetone sensor: influence of pd and ni nanoparticle modifications. ieee transactions on device and materials reliability, 15(3). datta, s. & chattopadhyay, p. p. soft computing techniques in advancement of structural metals.(2013). international materials reviews, 58 (8), 475-504. datta, s. (2016). materials design using computational intelligence techniques, crc press. deb k . (2001).multi-objective optimization using evolutionary algorithms. john wiley & sons ltd., chichester. deb, k., pratap, a., agarwal, s. & meyarivan t. (2002). a fast and elitist multiobjective genetic algorithm: nsga-ii. ieee trans., evolutionary computation, 6, 182–19 dey, s., ganguly, s. & datta, s. (2015). in silico design of high strength aluminium alloy using multi-objective ga. springer international publishing switzerland, doi https://doi.org/10.1007/978-3-31920294-5_28. dey. s., sultana, n., kaiser, md., s., dey, p. & datta, s. (2016). computational intelligence based design of age-hardenable aluminium alloys for different temperature regimes. materials and design, 92, 522–534. dolden, j., kjoy, m. & gdeath, r. (2004). an accurate comparison of methods for quantifying variable importance in artificial neural networks using simulated data. ecological modelling. 178 (3-4), 389–397. gharib, m. r. (2020). comparison of robust optimal qft controller with tfc and mfc controller in a multi-input multi-output system. reports in mechanical engineering, 1(1), 151-161. ghosal, s., dey, s., bhattacharyya, p., chattopadhyay, p. p., datta, s. (2019). 
datadriven design of ternary alloy catalysts for enhanced oxide-based gas sensors’ performance. computational materials science, 161, 255-260. goldberg d. e. (2002). genetic algorithms in search, optimization and machine learning. pearson-education, new delhi. kumar, r., das, r. r., mishra, v. n. & dwivedi r. (2011). wavelet coefficient trained neural network classifier for improvement in qualitative classification performance of oxygen-plasma treated thick film tin oxide sensor array exposed to different odors/gases. ieee sensors journal, 11 (4). kumar, s. (2004). neural network-a classroom approach. tata mcgraw-hill publishing company limited, new delhi. lee, s. m. , dyer, d. c. & gardner j w. (2003). design and optimisation of a hightemperature silicon micro-hotplate for nanoporous palladium pellistors. microelectronics journal, 34 (2), 115–126. longo, g. a., zilio, c., ortombina, l. & zigliotto, m. (2017). application of artificial neural network (ann) for modeling oxide-based nanofluids dynamic viscosity. international communications in heat and mass transfer, 83, 8–14. designing optimized ternary catalytic alloy electrode for efficiency improvement of semiconductor gas sensors using a machine learning approach 139 maity, i., acharyya, d., huang, k., chung, p., ho, m. & bhattacharyya, p. (2018). a comparative study on performance improvement of zno nanotubes based alcohol sensor devices by pd and rgo hybridization. ieee transactions on electron devices, 65( 8), 3528-3534. malbašić, s.b., & đurić, s.v. (2019). risk assessment framework: application of bayesian belief networks in an ammunition delaboration project. military technical courier, 67(3), 614-641. messinis, s., & vosniakos, g. c. (2020). an agent-based flexible manufacturing system controller with petri-net enabled algebraic deadlock avoidance. reports in mechanical engineering, 1(1), 77-92. milosevic, t., pamucar, d., & chatterjee, p. (2021). model for selecting a route for the transport of hazardous materials using a fuzzy logic system. military technical courier, 69(2), 355-390. mishra, g. p., sengupta, a., maji, s., sarkar, s. k. & bhattacharyya, p. (2010). the effect of catalytic metal contact on methane sensing performance of nanoporous zno -si heterojunction. international journal on smart sensing and intelligent systems, 3 (2). pławiak, p. & rzecki k. (2015). approximation of phenol concentration using computational intelligence methods based on signals from the metal-oxide sensor array. ieee sensors journal, 15(3). quaranta, f., rella, r., siciliano, p., capone, s., epifani, m. & vasanelli, l. (1999). a novel gas sensor based on sno2/os thin film for the detection of methane at low temperature. sensors and actuators b., 58, 350–355. ray, m., ganguly, s., das, m., datta, s., bandyopadhyay, n. r. & hossain, s. m. (2009). ann based model for in situ prediction of porosity of nanostructured porous silicon. materials and manufacturing processes, 24 (1), 83-87. roy, s., sarkar, c. k. & bhattacharyya, p. (2012). a highly sensitive methane sensor with nickel alloy microheater on micromachined si substrate. solid-state electronics, 2, 76, 84-90. singh, a., ghadai, r.k., kalita, k., chatterjee, p., pamucar, d. (2020). edm process parameter optimization for efficient machining of inconel-718. facta universitatis, series: mechanical engineering. 18(3), 473 490. sinha, a., sikdar, s. (dey), chattopadhyay,p. p. & datta, s. (2013). 
optimization of mechanical property and shape recovery behavior of ti-(_49 at %) ni alloy using artificial neural network and genetic algorithm. materials and design, 46, 227–234. wollenstein, j., burgmair, m., plescher, g.,sulima, t., hildenbrand, j. & bottner, h. (2003). cobalt oxide based gas sensors on silicon substrate for operation at low temperatures. sensors and actuators b, 93 (1-3), 442–448. © 2018 by the authors. submitted for possible open access publication under the terms and conditions of the creative commons attribution (cc by) license (http://creativecommons.org/licenses/by/4.0/). plane thermoelastic waves in infinite half-space caused decision making: applications in management and engineering vol. 4, issue 1, 2021, pp. 19-32. issn: 2560-6018 eissn: 2620-0104 doi: https://doi.org/10.31181/dmame2104019k * corresponding author. e-mail addresses: ckaramasa@anadolu.edu.tr (ç. karamaşa), edemir@pirireis.edu.tr (e. demir), salih.memis@giresun.edu.tr (s. memiş), selcuk.korucuk@giresun.edu.tr (s. korucuk). weighting the factors affecting logistics outsourcing çağlar karamaşa1*, ezgi demir2, salih memiş3 and selçuk korucuk3 1 anadolu university faculty of business department of business administration, turkey 2 piri reis university faculty of economics and administrative sciences department of management information systems, turkey 3 giresun university bulancak kadir karabaş vocational school department of international trade and logistics, turkey received: 20 september 2020; accepted: 30 october 2020; available online: 9 november 2020. original scientific paper abstract: today, growing and changing competitive conditions, products, and services, free movement of labor, and businesses with the information they develop strategies that create value to obtain a competitive advantage. now, final buyers have the convenience of purchasing the products they demand with the features and conditions they want and at the price they accept. in such an environment, businesses use their supply chain and logistics activities more effectively and efficiently than their competitors. today, achieving a strategic superiority in a global market where the content and quality of the products are the same is only possible by delivering the desired products to the customer at the desired price, at the desired time, in the desired amount, through the right channel, as quickly as possible and without any damage. in such a situation, the desire to focus on the main activities of the enterprises, the need for effective logistics operations, etc. logistics outsourcing has increased rapidly for reasons. businesses can carry out logistics activities requiring expertise thanks to third party logistics (3pl) service providers in the field such as transportation, storage, customs clearance, without investing in logistics. for logistics outsourcing to be beneficial, a correct logistics service provider must be selected under the needs of the business. selecting the right logistics service provider is important in increasing the benefit of outsourcing. in this study neutrosophic ahp was used to prioritize the factors. key words: logistics, outsourcing, neutrosophic ahp. mailto:ckaramasa@anadolu.edu.tr mailto:edemir@pirireis.edu.tr mailto:salih.memis@giresun.edu.tr mailto:selcuk.korucuk@giresun.edu.tr karamaşa et al./decis. mak. appl. manag. eng. 4 (1) (2021) 19-32 20 1. 
introduction nowadays, with the effects of the competitive environment and the effect of globalization, businesses transfer their work areas other than their main products to the enterprises that carry out their main activity products under the name of outsourcing to reduce costs and focus on their core competencies. in this way, businesses develop that product by focusing on their main products and at the same time, they can carry outside activities more systematically with the help of specialized enterprises. one of the issues where businesses transfer business to a structure outside the business with the help of outsourcing other than their main products is logistics services. businesses that feel the intensity of competition are extremely strong and think that it is difficult to allocate resources and time for logistics elements, get help from logistics companies specialized in their field to carry out one or more of their activities (such as warehousing, transportation, and inventory management) and thus, this situation provides them to become professional. logistics services have been transferred to businesses that provide 3pl services to provide better quality and less cost. at this point, it is important for businesses that will receive 3pl services to be able to select and eliminate the companies that provide this service and to make a decision to agree with the right one. the selection process of the 3pl business has played an important role in determining the performance of logistics activities. if a 3pl business that is not suitable for the business is selected, the quality of the logistics service is low, the efficiency of the logistics activities decreases, the customer and the business are damaged so there is a disconnection between the 3pl business and the customer and the business, etc. serious problems may be encountered. due to these problems, the customer can reduce the reputation of the business in the sector and lead to the loss of trust in the business. in the face of increasing competition with globalization and rising customer expectations, businesses that produce products and services focus on their main abilities and skills by leaving their logistics activities to 3pl. during this process, the relations between the logistics service provider and the enterprises receiving the service have come to the fore regarding service standards. the relationships that businesses establish with logistics service providers contribute to the increasing efficiency of the buyer businesses in operational and financial matters. the purpose of this study is to determine and weigh the factors of outsourcing in logistics companies operating in giresun. for the solution of these extremely complicated and complex elements, neutrosophic ahp as a multi-criteria decision making method was used. in the following part of the study, the literature has been reviewed, and the methods applied in the third part has been explained. in the fourth part, the method of the study has been applied to the problem, and in the last part, the study has been ended by making suggestions regarding the results and future studies. 2. literature review if an enterprise chooses the external source from which it will receive services for the realization of its logistics activities and transfer its activities to it, it will be important that it starts by choosing 3pl enterprises that are specialized in their fields. 
in recent years, there are many logistics service providers and they provide advantages to their customers by effectively performing logistics activities in today's competitive environment. while choosing the 3pl businesses, the business manager, who will receive logistics services, handles all other units of the business and chooses weighting the factors affecting logistics outsourcing and selecting the most ideal company 21 the 3pl business that will provide the best integration to this process, suitable for the technological infrastructure, offer the most advantages, and will add positive values to the reputation and financial power of the enterprise. when the literature has been examined; dapiran et al. (1996) and bhatnagar et al. (1999) have revealed that service delivery and price are the most important factors for outsourcing criteria. boyson et al. (1999) stated three headings as service costs, customer service, and financial stability as the most important criteria. petroni and braglia (2000) introduced different criteria such as reputation, geographical location, financial stability, experience, technological competence, flexibility, production capacity, and management competence. menon et al. (1998) suggested nine criteria for 3pl evaluation and selection, such as price, timely delivery, error rate, financial stability, creative management, fulfillment or exceeding promises, the presence of senior management, responding to unforeseen problems, meeting performance and quality requirements. prater and ghosh (2005) determined that smes operate abroad with the need to follow their customers in their research. besides, the international communication problem between overseas facilities poses a major obstacle for smes. another finding obtained from the research is that smes engaged in operational commercial activities in european countries, especially in germany, started to be interested in environmental issues such as reverse logistics. bottani and rizzi (2006) developed the fuzzy topsis approach in their studies and determined nine criteria such as compatibility, financial stability, service flexibility, performance, price, physical equipment, and information systems, quality, strategic attitude, trust, and justice to select the most suitable 3pl business. it has been worked on research by arbore and ordanini (2006), in which the outsourcing strategies of smes were examined in terms of the size of the enterprise and the geographical region to which these enterprises are affiliated. in this research, it has been determined that the size of the enterprise is a relative dimension in the choice of outsourcing strategy for smes in terms of internal resources. işıklar et al. (2007) used an integrated approach combining cbr, rbr, and consensus programming to address the 3pl selection problem. the evaluation criteria include cost, quality, technical ability, financial stability, successful track record, service category, personnel qualification, information technology, comparable culture, region, etc. jharkharia and shankar (2007) used the anp approach in their study to select the most suitable 3pl according to four main determinative criteria such as compatibility, cost, quality, and reputation. fu and liu (2007) determined the weights of the criteria with the ahp technique by considering five factors for outsourcing, including cost, quality, project, certification, and delivery performance in their study. 
qureshi et al., (2008) developed an interpretative structural modeling-based approach to define and classify key criteria and to examine their roles in 3pl evaluation. in the study, quality of service, size, and quality of fixed assets, management quality, computing capability, delivery performance, information sharing, and trust, operational performance, compliance, financial stability, geographic spread and range, long-term relationship, reputation, the optimal cost in operation and delivery, volatility and flexibility as 15 criteria were determined. liu and wang (2009) presented a three-step approach for the evaluation and selection of 3pl enterprises. in the first stage, the fuzzy delphi method was used to define important evaluation criteria. next, a fuzzy inference method has been applied to estimate non-eligible 3pl businesses. at the last stage, a fuzzy linear assignment approach has been developed for final selection. mickaitis et al. (2009) obtained from their study, on the outsourcing of smes in lithuania; it has been karamaşa et al./decis. mak. appl. manag. eng. 4 (1) (2021) 19-32 22 observed that outsourcing is widely used in smes in lithuania, and the main factors for these enterprises to prefer outsourcing are to reduce costs, increase efficiency and productivity, and increase quality. the short-term benefits of outsourcing have been identified as reducing the need for personnel, allowing better concentration on basic activities, and reducing costs. ji-fan ren et al. (2010) obtained from their study examining the determinants of the partnership quality of smes in china on outsourcing; it has been determined that perceived benefits of outsourcing, outsourcing competency management, the correct definition of outsourcing, and senior management's attitude towards outsourcing are significantly related to the quality of outsourcing. in light of the findings obtained from a study by o'regan and kling (2011) examining whether outsourcing is a competitive factor for smes operating in the production sector; it has been observed that small enterprises with low r&d investments tend to outsource. özbek and eren (2013) adopted the analytical network process approach for the selection of 3pl companies in their studies and considered quality, long-term relationships, company image, and operational performance as the basic criteria. govindan et al. (2016) used the dematel method in their examinations and determined that the most important criteria in the selection of 3pl companies are delivery performance, technology level, financial stability, human resources management, service quality, and customer satisfaction, respectively. sremac et al. (2018) applied with a total of 10 logistics providers for the transport of dangerous goods for chemical industry companies in their study. the importance of the eight criteria underlying the study in which the assessment was made, it was determined using the rough-swara method. korucuk (2018) evaluated the criteria to be used in the selection of third-party logistics (3pl) in cold chain transportation companies in istanbul and made the selection of the best 3pl. as a result of the study, it has been determined that “business performance” is the most important main criterion in the selection of 3pl companies, and the “c” option is the best 3pl provider company. erdoğan and tokgöz (2020) investigated the role of formal and relational governance in the success of outsourcing in information technology (it) in the aviation industry. 
according to the results of the research contracts and relationship norms in the success of it outsourcing business aviation operating in turkey, it is effective individually. also, they stated that formal and relational governance mechanisms are complementary rather than substitutes for each other. when businesses want to work with a company that provides logistics services, it is not an easy process to decide which company to be in the market. there is more than one third party logistics company with different competencies and skills in the market. outsourcing for logistics operations should be determined by experts. the increasing demand for outsourcing also increases the service alternatives offered. logistics service provider companies, which increase the level of competition, enable access with high quality and the most affordable cost level. the factors affecting the choice of outsourcing of enterprises have been listed as follows: weighting the factors affecting logistics outsourcing and selecting the most ideal company 23 table 1. 3pl main selection criteria criteria authors customer service quality menon et al. (1998), boyson et al. (1999), bottani and rizzi (2006), işıklar et al. (2007), jharkharia and shankar (2007), fu and liu (2007), qureshi et al. (2008), bhatti et al. (2010), chen and wu (2011), erkayman et al. (2012), li et al. (2012), govindan et al. (2012), bansal and kumar (2013), cirpin and kabadayi (2015), hwang et al. (2016), sremac et al. (2018) computing capability bottani and rizzi (2006), işıklar et al. (2007), bhatti et al. (2010), rajesh et al. (2011), erkayman et al. (2012), bansal and kumar (2013), hwang et al. (2016), sremac et al. (2018) operational performance chen and wu (2011), wong (2012), korucuk (2018) cost menon et al. (1998), boyson et al. (1999), bottani and rizzi (2006), işıklar et al. (2007), fu and liu (2007), jharkharia and shankar (2007), qureshi et al. (2008), vijayvargiya and dey (2010), rajesh et al. (2011), chen and wu (2011), govindan et al. (2012), erkayman et al. (2012), bansal and kumar (2013), cirpin and kabadayi (2015), hwang et al. (2016), sremac et al. (2018), korucuk(2018) supply chain capability bhatti et al. (2010), sremac et al. (2018) sustainability bansal and kumar (2013), cirpin and kabadayi (2015) risk management capability perçin (2009), rajesh et al. (2011), sremac et al. (2018), korucuk (2018) confidence petroni and braglia (2000), bottani and rizzi (2006), jharkharia and shankar (2007), qureshi etal. (2008), bansal and kumar (2013), sremac et al. (2018) geographical location suitability petroni and braglia (2000), qureshi et al. (2008), govindan et al. (2012), bansal and kumar (2013) history of performance petroni and braglia (2000), fu and liu (2007), qureshi et al. (2008), govindan et al. (2012) value added service vijayvargiya and dey (2010), bansal and kumar (2013) on time delivery rajesh et al. (2011), erkayman et al. (2012), govindan et al. (2012) flexibility petroni and braglia (2000), bottani and rizzi (2006), erkayman et al. (2012), govindan et al. (2012), korucuk (2018) karamaşa et al./decis. mak. appl. manag. eng. 4 (1) (2021) 19-32 24 in the literature review, it has been determined that there are limited studies on logistics outsourcing and choosing the most ideal company. the study differs from other studies in terms of the method used and the province applied. it is thought that it will contribute to the literature with this aspect. table 2. 
3pl selection criteria

main criteria: cost; sub-criteria: transportation / storage and documentation cost, payment terms, discount offers.
main criteria: customer service quality; sub-criteria: scope of services, flexibility and reliability, timeliness and ease of transaction / communication, customer satisfaction and cooperation, value added service.
main criteria: computing capability; sub-criteria: edi presence and compatibility, computing network availability, data integrity and reliability, system stability.
main criteria: operational performance; sub-criteria: delivery performance and reliability, relationship management, geolocation compliance, performance history, document accuracy.
main criteria: supply chain capability; sub-criteria: trained logistics staff, infrastructure / equipment and logistics technology, efficiency capacity, risk management capability, reverse logistics function.
main criteria: sustainability; sub-criteria: economic responsibility, social responsibility, environmental responsibility.

3. methodology

in this section, the neutrosophic set (ns) is introduced first, then the single-valued neutrosophic set (svns), a special case of the ns, is explained. finally, the steps of neutrosophic ahp, a recently developed multi-criteria decision-making method, are stated.

3.1. neutrosophic set

smarandache (1998) introduced the concept of neutrosophic sets (ns), characterized by independent truth, indeterminacy and falsity membership functions. let u be a universe of discourse and $x \in U$. a neutrosophic set n is expressed by a truth membership function $T_N(x)$, an indeterminacy membership function $I_N(x)$ and a falsity membership function $F_N(x)$, and is represented as $N = \{\langle x: T_N(x), I_N(x), F_N(x)\rangle,\; x \in U\}$. the functions $T_N(x)$, $I_N(x)$ and $F_N(x)$ are real standard or nonstandard subsets of $]0^-, 1^+[$, i.e. $T, I, F: U \to ]0^-, 1^+[$. there is no restriction on their sum, so $0^- \le \sup T_N(x) + \sup I_N(x) + \sup F_N(x) \le 3^+$. the complement of an ns n is denoted by $N^C$ and defined as follows:

$$T_N^C(x) = 1^+ \ominus T_N(x) \quad (1)$$
$$I_N^C(x) = 1^+ \ominus I_N(x) \quad (2)$$
$$F_N^C(x) = 1^+ \ominus F_N(x) \quad \text{for all } x \in U \quad (3)$$

an ns n is contained in another ns p, written $N \subseteq P$, if and only if $\inf T_N(x) \le \inf T_P(x)$, $\sup T_N(x) \le \sup T_P(x)$, $\inf I_N(x) \ge \inf I_P(x)$, $\sup I_N(x) \ge \sup I_P(x)$, $\inf F_N(x) \ge \inf F_P(x)$ and $\sup F_N(x) \ge \sup F_P(x)$ for all $x \in U$ (biswas et al., 2016).

3.2. single valued neutrosophic sets (svns)

the single valued neutrosophic set (svns), a special case of the ns for handling indeterminate, inconsistent and incomplete information, was developed by wang, smarandache, zhang and sunderraman (2010). it restricts the membership degrees to the interval [0,1], which makes it better suited to solving real-world problems. let u be a universe of discourse and $x \in U$. a single-valued neutrosophic set b in u is described by a truth membership function $T_B(x)$, an indeterminacy membership function $I_B(x)$ and a falsity membership function $F_B(x)$. when u is continuous, an svns b is depicted as $B = \int_U \langle T_B(x), I_B(x), F_B(x)\rangle / x,\; x \in U$. when u is discrete, an svns b is represented as $B = \sum_{i=1}^{n} \langle T_B(x_i), I_B(x_i), F_B(x_i)\rangle / x_i,\; x_i \in U$ (mondal, pramanik, & smarandache, 2016). the functions $T_B(x)$, $I_B(x)$ and $F_B(x)$ are real standard subsets of [0,1], i.e. $T_B(x): U \to [0,1]$, $I_B(x): U \to [0,1]$ and $F_B(x): U \to [0,1]$, and their sum lies in [0,3], that is $0 \le T_B(x) + I_B(x) + F_B(x) \le 3$ (biswas et al., 2016). let a single-valued triangular neutrosophic number $\tilde{b} = \langle (b_1, b_2, b_3); \alpha_{\tilde{b}}, \theta_{\tilde{b}}, \beta_{\tilde{b}} \rangle$ be a special neutrosophic set on r.
additionally, $\alpha_{\tilde{b}}, \theta_{\tilde{b}}, \beta_{\tilde{b}} \in [0,1]$ and $b_1, b_2, b_3 \in R$ with $b_1 \le b_2 \le b_3$. the truth, indeterminacy and falsity membership functions of this number are computed as below (abdel-basset et al., 2017):

$$T_{\tilde{b}}(x) = \begin{cases} \alpha_{\tilde{b}}\left(\dfrac{x-b_1}{b_2-b_1}\right) & b_1 \le x \le b_2 \\ \alpha_{\tilde{b}} & x = b_2 \\ \alpha_{\tilde{b}}\left(\dfrac{b_3-x}{b_3-b_2}\right) & b_2 < x \le b_3 \\ 0 & \text{otherwise} \end{cases} \quad (4)$$

$$I_{\tilde{b}}(x) = \begin{cases} \dfrac{b_2-x+\theta_{\tilde{b}}(x-b_1)}{b_2-b_1} & b_1 \le x \le b_2 \\ \theta_{\tilde{b}} & x = b_2 \\ \dfrac{x-b_2+\theta_{\tilde{b}}(b_3-x)}{b_3-b_2} & b_2 < x \le b_3 \\ 1 & \text{otherwise} \end{cases} \quad (5)$$

$$F_{\tilde{b}}(x) = \begin{cases} \dfrac{b_2-x+\beta_{\tilde{b}}(x-b_1)}{b_2-b_1} & b_1 \le x \le b_2 \\ \beta_{\tilde{b}} & x = b_2 \\ \dfrac{x-b_2+\beta_{\tilde{b}}(b_3-x)}{b_3-b_2} & b_2 < x \le b_3 \\ 1 & \text{otherwise} \end{cases} \quad (6)$$

in eqs. (4)-(6), $\alpha_{\tilde{b}}$, $\theta_{\tilde{b}}$ and $\beta_{\tilde{b}}$ denote the maximum truth membership, minimum indeterminacy membership and minimum falsity membership degrees, respectively. let $\tilde{b} = \langle (b_1,b_2,b_3); \alpha_{\tilde{b}}, \theta_{\tilde{b}}, \beta_{\tilde{b}} \rangle$ and $\tilde{c} = \langle (c_1,c_2,c_3); \alpha_{\tilde{c}}, \theta_{\tilde{c}}, \beta_{\tilde{c}} \rangle$ be two single-valued triangular neutrosophic numbers and $\lambda \ne 0$ a real number. the arithmetic operations on such numbers are defined as follows (abdel-basset et al., 2017).

addition:
$$\tilde{b} + \tilde{c} = \langle (b_1+c_1,\, b_2+c_2,\, b_3+c_3); \alpha_{\tilde{b}} \wedge \alpha_{\tilde{c}},\, \theta_{\tilde{b}} \vee \theta_{\tilde{c}},\, \beta_{\tilde{b}} \vee \beta_{\tilde{c}} \rangle \quad (7)$$

subtraction:
$$\tilde{b} - \tilde{c} = \langle (b_1-c_3,\, b_2-c_2,\, b_3-c_1); \alpha_{\tilde{b}} \wedge \alpha_{\tilde{c}},\, \theta_{\tilde{b}} \vee \theta_{\tilde{c}},\, \beta_{\tilde{b}} \vee \beta_{\tilde{c}} \rangle \quad (8)$$

inverse (for $\tilde{b} \ne 0$):
$$\tilde{b}^{-1} = \langle (1/b_3,\, 1/b_2,\, 1/b_1); \alpha_{\tilde{b}}, \theta_{\tilde{b}}, \beta_{\tilde{b}} \rangle \quad (9)$$

multiplication by a constant:
$$\lambda\tilde{b} = \begin{cases} \langle (\lambda b_1, \lambda b_2, \lambda b_3); \alpha_{\tilde{b}}, \theta_{\tilde{b}}, \beta_{\tilde{b}} \rangle & \lambda > 0 \\ \langle (\lambda b_3, \lambda b_2, \lambda b_1); \alpha_{\tilde{b}}, \theta_{\tilde{b}}, \beta_{\tilde{b}} \rangle & \lambda < 0 \end{cases} \quad (10)$$

division by a constant:
$$\frac{\tilde{b}}{\lambda} = \begin{cases} \langle (b_1/\lambda,\, b_2/\lambda,\, b_3/\lambda); \alpha_{\tilde{b}}, \theta_{\tilde{b}}, \beta_{\tilde{b}} \rangle & \lambda > 0 \\ \langle (b_3/\lambda,\, b_2/\lambda,\, b_1/\lambda); \alpha_{\tilde{b}}, \theta_{\tilde{b}}, \beta_{\tilde{b}} \rangle & \lambda < 0 \end{cases} \quad (11)$$

multiplication of two numbers:
$$\tilde{b}\tilde{c} = \begin{cases} \langle (b_1c_1, b_2c_2, b_3c_3); \alpha_{\tilde{b}} \wedge \alpha_{\tilde{c}}, \theta_{\tilde{b}} \vee \theta_{\tilde{c}}, \beta_{\tilde{b}} \vee \beta_{\tilde{c}} \rangle & b_3 > 0,\; c_3 > 0 \\ \langle (b_1c_3, b_2c_2, b_3c_1); \alpha_{\tilde{b}} \wedge \alpha_{\tilde{c}}, \theta_{\tilde{b}} \vee \theta_{\tilde{c}}, \beta_{\tilde{b}} \vee \beta_{\tilde{c}} \rangle & b_3 < 0,\; c_3 > 0 \\ \langle (b_3c_3, b_2c_2, b_1c_1); \alpha_{\tilde{b}} \wedge \alpha_{\tilde{c}}, \theta_{\tilde{b}} \vee \theta_{\tilde{c}}, \beta_{\tilde{b}} \vee \beta_{\tilde{c}} \rangle & b_3 < 0,\; c_3 < 0 \end{cases} \quad (12)$$

division of two numbers:
$$\frac{\tilde{b}}{\tilde{c}} = \begin{cases} \langle (b_1/c_3,\, b_2/c_2,\, b_3/c_1); \alpha_{\tilde{b}} \wedge \alpha_{\tilde{c}}, \theta_{\tilde{b}} \vee \theta_{\tilde{c}}, \beta_{\tilde{b}} \vee \beta_{\tilde{c}} \rangle & b_3 > 0,\; c_3 > 0 \\ \langle (b_3/c_3,\, b_2/c_2,\, b_1/c_1); \alpha_{\tilde{b}} \wedge \alpha_{\tilde{c}}, \theta_{\tilde{b}} \vee \theta_{\tilde{c}}, \beta_{\tilde{b}} \vee \beta_{\tilde{c}} \rangle & b_3 < 0,\; c_3 > 0 \\ \langle (b_3/c_1,\, b_2/c_2,\, b_1/c_3); \alpha_{\tilde{b}} \wedge \alpha_{\tilde{c}}, \theta_{\tilde{b}} \vee \theta_{\tilde{c}}, \beta_{\tilde{b}} \vee \beta_{\tilde{c}} \rangle & b_3 < 0,\; c_3 < 0 \end{cases} \quad (13)$$

the score function $s_b$ of a single-valued triangular neutrosophic number $b = (b_1, b_2, b_3)$ is given by (stanujkic et al., 2017):
$$s_b = (1 + b_1 - 2b_2 - b_3)/2, \qquad s_b \in [-1, 1] \quad (14)$$
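to make the arithmetic of eqs. (7)-(14) concrete, the short python sketch below implements a single-valued triangular neutrosophic number with addition, inverse and the score function. the class name and the example values are illustrative assumptions of ours (the paper itself reports no code), so the snippet should be read as a minimal sketch of the stated definitions rather than the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class TriangularNeutrosophicNumber:
    # <(b1, b2, b3); alpha, theta, beta> with b1 <= b2 <= b3
    b1: float
    b2: float
    b3: float
    alpha: float  # maximum truth membership degree
    theta: float  # minimum indeterminacy membership degree
    beta: float   # minimum falsity membership degree

    def __add__(self, other):
        # eq. (7): component-wise sums; min of truth degrees, max of the others
        return TriangularNeutrosophicNumber(
            self.b1 + other.b1, self.b2 + other.b2, self.b3 + other.b3,
            min(self.alpha, other.alpha),
            max(self.theta, other.theta),
            max(self.beta, other.beta))

    def inverse(self):
        # eq. (9): the reciprocal reverses the order of the support (b1, b2, b3)
        return TriangularNeutrosophicNumber(
            1.0 / self.b3, 1.0 / self.b2, 1.0 / self.b1,
            self.alpha, self.theta, self.beta)

    def score(self):
        # eq. (14), stanujkic et al. (2017)
        return (1.0 + self.b1 - 2.0 * self.b2 - self.b3) / 2.0


# example: the "strongly influential" judgment 5 = <(4,5,6); 0.8, 0.15, 0.2>
five = TriangularNeutrosophicNumber(4, 5, 6, 0.8, 0.15, 0.2)
print(five.inverse())   # reciprocal judgment, as used for the lower triangle
print(five + five)      # eq. (7)
```

the remaining operations of eqs. (8) and (10)-(13) follow the same pattern and can be added as further methods.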
3.3. neutrosophic ahp

the steps of neutrosophic ahp are as follows (abdel-basset et al., 2017):

step 1: the decision problem is arranged hierarchically in terms of goal, criteria, sub-criteria and alternatives.

step 2: pairwise comparisons are constructed to form a neutrosophic evaluation matrix of triangular neutrosophic numbers expressing the experts' views. the neutrosophic pairwise evaluation matrix $\tilde{A}$ is written as:
$$\tilde{A} = \begin{bmatrix} \tilde{1} & \tilde{a}_{12} & \cdots & \tilde{a}_{1n} \\ \vdots & \vdots & \ddots & \vdots \\ \tilde{a}_{n1} & \tilde{a}_{n2} & \cdots & \tilde{1} \end{bmatrix} \quad (15)$$
where $\tilde{a}_{ji} = \tilde{a}_{ij}^{-1}$.

step 3: the neutrosophic pairwise evaluation matrix is filled by applying the scale transformed for the neutrosophic environment, given in table 3.

table 3. ahp transformed scale related to neutrosophic triangular numbers (abdel-basset et al., 2018)

value        explanation                                     neutrosophic triangular scale
1            equally influential                             ⟨(1,1,1); 0.5, 0.5, 0.5⟩
3            slightly influential                            ⟨(2,3,4); 0.3, 0.75, 0.7⟩
5            strongly influential                            ⟨(4,5,6); 0.8, 0.15, 0.2⟩
7            very strongly influential                       ⟨(6,7,8); 0.9, 0.1, 0.1⟩
9            absolutely influential                          ⟨(9,9,9); 1, 0, 0⟩
2, 4, 6, 8   intermediate values between two close scales    ⟨(1,2,3); 0.4, 0.65, 0.6⟩, ⟨(3,4,5); 0.6, 0.35, 0.4⟩, ⟨(5,6,7); 0.7, 0.25, 0.3⟩, ⟨(7,8,9); 0.85, 0.1, 0.15⟩

step 4: the neutrosophic pairwise evaluation matrix is transformed into a deterministic pairwise evaluation matrix for calculating the criterion weights. let $\tilde{a}_{ij} = \langle (d_1, e_1, f_1); \alpha_{\tilde{a}}, \theta_{\tilde{a}}, \beta_{\tilde{a}} \rangle$ be a single-valued neutrosophic number; the score and accuracy degrees of $\tilde{a}_{ij}$ are calculated as:

$$S(\tilde{a}_{ij}) = \frac{1}{16}\,[d_1 + e_1 + f_1] \times (2 + \alpha_{\tilde{a}} - \theta_{\tilde{a}} - \beta_{\tilde{a}}) \quad (16)$$
$$A(\tilde{a}_{ij}) = \frac{1}{16}\,[d_1 + e_1 + f_1] \times (2 + \alpha_{\tilde{a}} - \theta_{\tilde{a}} + \beta_{\tilde{a}}) \quad (17)$$

the score and accuracy degrees of $\tilde{a}_{ji}$ follow from:
$$S(\tilde{a}_{ji}) = 1/S(\tilde{a}_{ij}) \quad (18)$$
$$A(\tilde{a}_{ji}) = 1/A(\tilde{a}_{ij}) \quad (19)$$

the deterministic pairwise evaluation matrix is obtained by replacing each triangular neutrosophic number of the neutrosophic pairwise evaluation matrix with its score value:
$$D = \begin{bmatrix} 1 & d_{12} & \cdots & d_{1n} \\ \vdots & \vdots & \ddots & \vdots \\ d_{n1} & d_{n2} & \cdots & 1 \end{bmatrix} \quad (20)$$

the ranking of priorities as the eigenvector x is then obtained in two steps: (a) each column entry is normalized by dividing it by the column sum; (b) the priorities are obtained as the row averages of the normalized matrix.

step 5: the consistency index (ci) and consistency ratio (cr) are calculated to measure the inconsistency of the decision-makers' judgments in the whole pairwise evaluation matrix. if cr exceeds 0.1, the judgments are unreliable and the process should be repeated. ci is calculated as follows: (a) each value in the first column of the pairwise evaluation matrix is multiplied by the priority of the first criterion, and this is repeated for all columns; the values are then summed across the rows to obtain the weighted sum vector. (b) each element of the weighted sum vector is divided by the corresponding criterion priority, and the average of these values is denoted $\lambda_{max}$. (c) ci is then computed as:
$$CI = \frac{\lambda_{max} - n}{n - 1} \quad (21)$$
where n is the number of elements being compared. once ci is known, cr is obtained as:
$$CR = \frac{CI}{RI} \quad (22)$$
where ri is the consistency index of a randomly generated pairwise evaluation matrix, shown in table 4.

table 4. ri table considered for obtaining the cr value (abdel-basset et al., 2017)

order of random matrix (n):  1     2     3     4     5     6     7     8     9     10
related ri value:            0     0     0.58  0.9   1.12  1.24  1.32  1.4   1.45  1.49

step 6: the overall priority values for each alternative are calculated and the ranking is carried out.
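steps 4 and 5 can be condensed into a short numerical routine. the following python sketch (using numpy) illustrates the score transformation of eq. (16), the column normalization and row averaging of step 4, and the ci/cr check of eqs. (21)-(22). the 3x3 example matrix and the helper names are purely hypothetical assumptions of ours, so the snippet is a minimal illustration of the procedure rather than the authors' code.

```python
import numpy as np

# random index values from table 4, indexed by matrix order n = 1..10
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.9, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.4, 9: 1.45, 10: 1.49}

def score_degree(d1, e1, f1, alpha, theta, beta):
    """Score degree of a triangular neutrosophic judgment, eq. (16)."""
    return (d1 + e1 + f1) * (2 + alpha - theta - beta) / 16.0

def nahp_priorities(D):
    """Priorities, CI and CR from a deterministic pairwise matrix D, eqs. (20)-(22)."""
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    # step 4: normalize each column by its sum, then average across rows
    normalized = D / D.sum(axis=0, keepdims=True)
    priorities = normalized.mean(axis=1)
    # step 5: weighted sum vector -> lambda_max -> CI -> CR
    weighted_sum = D @ priorities
    lambda_max = np.mean(weighted_sum / priorities)
    ci = (lambda_max - n) / (n - 1)
    cr = ci / RI[n]
    return priorities, ci, cr

# hypothetical example: upper triangle of a 3x3 deterministic matrix built from
# score degrees of scale values; reciprocals fill the lower triangle, eq. (18)
d12 = score_degree(2, 3, 4, 0.3, 0.75, 0.7)   # "slightly influential"
d13 = score_degree(4, 5, 6, 0.8, 0.15, 0.2)   # "strongly influential"
d23 = score_degree(1, 2, 3, 0.4, 0.65, 0.6)   # intermediate value 2
D = [[1.0, d12, d13],
     [1.0 / d12, 1.0, d23],
     [1.0 / d13, 1.0 / d23, 1.0]]
w, ci, cr = nahp_priorities(D)
print(w, ci, cr)   # judgments are acceptable when cr < 0.1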
4. case study and analysis

in this study, six criteria (cost, customer service quality, data processing ability, operational performance, supply chain ability, and sustainability) considered as factors affecting outsourcing related to 3pl are first weighted via neutrosophic ahp. for this purpose, the evaluations of five decision-makers in 3pl are considered. the neutrosophic evaluation matrix of the factors affecting outsourcing related to 3pl is constructed from the decision-makers' linguistic judgments. the neutrosophic evaluation matrix is transformed into a crisp one by using equation (16) and taking the geometric means of the five decision-makers' views. the crisp evaluation matrix for the criteria is shown in table 5.

table 5. the crisp evaluation matrix for criteria related to outsourcing

criteria                    cost    customer service quality   data processing ability   operational performance   supply chain ability   sustainability
cost                        1       1.995   0.988   1.337   1.337   0.976
customer service quality    0.501   1       1.226   0.895   0.654   0.895
data processing ability     1.012   0.815   1       1.226   0.895   0.895
operational performance     0.747   1.116   0.815   1       1.226   1.337
supply chain ability        0.747   1.528   1.116   0.815   1       1.995
sustainability              1.023   1.116   1.116   0.747   0.501   1

the normalized evaluation matrix for the criteria is constructed as table 6.

table 6. the normalized evaluation matrix for criteria related to outsourcing

criteria                    cost    customer service quality   data processing ability   operational performance   supply chain ability   sustainability
cost                        0.596   0.203   0.116   0.176   0.169   0.113
customer service quality    0.079   0.101   0.144   0.118   0.082   0.104
data processing ability     0.161   0.083   0.117   0.161   0.113   0.104
operational performance     0.119   0.113   0.096   0.131   0.155   0.155
supply chain ability        0.119   0.155   0.131   0.107   0.126   0.232
sustainability              0.163   0.113   0.131   0.098   0.063   0.116

finally, the priorities for the criteria, i.e. the eigenvector x, are obtained by taking the overall row averages and are presented in table 7.

table 7. priorities for criteria related to outsourcing

criteria                    priorities
cost                        0.1528
customer service quality    0.1030
data processing ability     0.1139
operational performance     0.1348
supply chain ability        0.1354
sustainability              0.1244

according to table 7, cost is the most important criterion with a value of 0.1528, while customer service quality is the least important with a value of 0.1030. the consistency of the decision-makers' judgments is then checked by computing the ci and cr values. the ci value is found to be 0.018 and, by using equation (22), the cr value is 0.012. the decision-makers' evaluations are consistent since the cr value is smaller than 0.1.

5. conclusion

in this study, the factors affecting outsourcing related to 3pl, determined through an extensive literature review, are ranked using neutrosophic ahp. single-valued neutrosophic sets are preferred over crisp, fuzzy, interval-valued and intuitionistic sets because of their efficiency, flexibility and ease in expressing decision-makers' indeterminate judgments. ranking the factors affecting outsourcing related to 3pl, a complex real-world decision-making problem, can thus be solved efficiently in a neutrosophic-set-based environment. in further research, the set of factors affecting outsourcing related to 3pl can be expanded and the results compared with different multi-criteria decision-making methods. various hybrid techniques can also be proposed and applied to real-world complex decision-making problems.

author contributions: each author has participated and contributed sufficiently to take public responsibility for appropriate portions of the content.
funding: this research received no external funding. conflicts of interest: the authors declare no conflicts of interest. references abdel-basset, m., mohamed, m., & smarandache, f. (2018). an extension of neutrosophic ahp swot analysis for strategic planning and decision making. symmetry, 1-18. abdel-basset, m., mohamed, m., zhou, y., & hezam, i. (2017). multi-criteria group decision making based on neutrosophic analytic hierarchy process. journal of intelligent & fuzzy systems, 4055-4066. arbore, a. & ordanini, a. (2006). broadband divide among smes: the role of size, location and outsourcing strategies, international small business journal: researching entrepreneurship, 24(1), 83-99. bansal, a. & kumar, p. (2013). 3pl selection using hybrid model of ahp-promethee, international journal of service and operations management, 14(3), 373-397. bhatnagar, r., sohal, a.s. & millen, r. (1999). third-party logistics services: a singapore perspective. international journal of physical distribution ve logistics management ,29(9), 569-587. bhatti, r. s., kumar, p. & kumar, d. (2010). analytical modeling of third party service provider selection in lead logistics provider environments, journal of modelling in management, 5(3), 275-286. biswas, p., pramanik, s., & giri, b. (2016). some distance measures of single valued neutrosophic hesitant fuzzy sets and their applications to multiple attribute decision making. in f. smarandache, & s. pramanik (eds.) in new trends in neutrosophic theory and applications (pp. 27-34). brussels: pons publishing house. bottani, e. & rizzi, a. (2006). a fuzzy topsis methodology to support outsourcing of logistics services. supply chain management: an international journal, 11(4), 294308. boyson, s., corsi, t., dresner, m. & rabinovich, e. (1999). managing effective third party logistics relationships: what does it take? journal of business logistics, 20(1), 73-100. chen, k. y. & wu, w.t.(2011). applying analytic network process in logistics service provider selection – a case study of the industry investing in southeast asia, international journal of electronic business management, 9(1), 24-36. cirpin, b. k. & kabadayi, n. (2015), analytic hierarchy process in third-party logistics provider selection criteria evaluation: a case study in it distributor company, international journal of multidisciplinary sciences and engineering, 6(3), 1-6. weighting the factors affecting logistics outsourcing and selecting the most ideal company 31 dapiran, p., lieb, r., millen, r. & sohal, a. (1996). third party logistics services usage by large australian firms. international journal of physical distribution and logistics management, 26(10), 36-45. erdoğan, d. & tokgöz, n. (2020). bilgi teknolojileri dış kaynak kullanımı başarısında biçimsel ve ilişkisel yönetişimin rolü: havacılık sektöründe bir araştırma. i̇zmir i̇ktisat dergisi, 35(2), 221-239. erkayman, b., gundogar, e. & yilmaz a. (2012), an integrated fuzzy approach for strategic alliance partner selection in third-party logistics, the scientific world journal, 2012, 1-6. fu, y. & liu, h. (2007). information systems outsourching vendor selection based on analiytic hierarchy process, in wireless communications, networking and mobile computing international conference, 6250-6253. govindan, k, khodaverdi, r. & vafadarnikjoo, a, (2016), a grey dematel approachtodevelop third-partylogistics provider selection criteria, industrial management & data systems, 116(4), 690-722. govindan, k, palaniappan m, zhu, q. & kannan, d. (2012). 
analysis of third party reverse logistics provider using interpretive structural modelling, international journal of production economics, 140(1), 204-211. hwang, b. n., chen, t. t. & lin, j. t. (2016). 3pl selection criteria in integrated circuit manufacturing industry in taiwan, supply chain management: an international journal, 21(1), 103-124. işıklar, g., alptekin, e. & büyüközkan, g. (2007). application of a hybrid intelligent decision support model in logistics outsourcing. computers & operations research, 34(12), 3701-3714. jharkharia, s. & shankar, r. (2007). selection of logistics service provider: an analytic network process (anp) approach. omega, 35(3), 274-289. ji-fan ren, s., ngai, e. w.t. & cho, v. (2010). examining the determinants of outsourcing partnership quality in chinese small-and-medium-sized enterprises, international journal of production research, 48(2), 453-475. korucuk, s. (2018), soğuk zincir taşımacılığı yapan i̇şletmelerde 3pl firma seçimi: i̇stanbul örneği, iğdır üniversitesi sosyal bilimler dergisi, 16, 341-365. li f, li l, jin, c, wang r, wang h. & yang l. (2012). a 3pl supplier selection model based on fuzzy sets, computers & operations research, 39(8), 1879-1884. liu, h. t. & wang, w.k. (2009). an integrated fuzzy approach for provider evulation and selection in third-party logistics, expert systems with applications, 36(3), 43874398. menon, m. k., mcginnis, m. a., & ackerman, k. b. (1998). selection criteria for providers of third-party logistics services: an exploratory study. journal of business logistics, 19(1), 121-137. mickaitis, a., bartkus, e. v. & zascizinskiene, g. (2009). empirical research on outsourcing in lithuanian small business segment, engineering economics, 5, 91 101. karamaşa et al./decis. mak. appl. manag. eng. 4 (1) (2021) 19-32 32 mondal, k., pramanik, s., & smarandache, f. (2016). several trigonometric hamming similarity measures of rough neutrosophic sets and their applications in decision making. in f. smarandache, & s. pramanik (eds.) new trends in neutrosophic theory and applications (pp. 93-103). brussels: pons publishing house. o’regan, n. & kling, g. (2011). technology outsourcing in manufacturing smes: another competitive resource? r&d management, 41(1),92-105. özbek, a., & eren,t, (2013), analitik ağ süreci yaklaşımıyla üçünü parti lojistik (3pl) firma seçimi”, atatürk üniversitesi i̇ktisadi ve i̇dari bilimler dergisi, 27(1), 95-113. perçin, s. (2009). evaluation of third-party logistics (3pl) providers by using a twophase ahp and topsis methodology, benchmarking: an international journal, 16(5), 588-604. petroni, a. & braglia, m. (2000). vendor selection using principal component analysis. journal of supply chain management, 36(2), 63-69. prater, e. & ghosh, s. (2005). current operational practices of u.s. small and mediumsized enterprises in europe, journal of small business management, 43(2), 155-169. qureshi, m. n., kumar, d. & kumar, p. (2008). an integrated model to identify and classify the key criteria and their role in the assessment of 3pl services providers. asia pacific journal of marketing and logistics, 20(2), 227-249. rajesh, r., pugazhendhi, s., ganesh, k. & muralidharan c. (2011). aqua: analytical model for evaluation and selection of third-party logistics service provider in supply chain, international journal of services and operations management, 8(1), 27-45. smarandache, f. (1998). a unifying field in logics, neutrosophy: neutrosophic probability, set and logic. rehoboth: american research press. 
sremac, s., stević, ž., pamučar, d., arsić, m. & matić, b. (2018). evaluation of a third party logistics (3pl) provider using a rough swara–waspas model based on a new rough dombi aggregator. symmetry, 10(8), 1-25. vijayvargiya, a. & dey, a. k. (2010). an analytical approach for selection of a logistics provider, management decisions, 48(3), 403-418. wang, h., smarandache, f., zhang, y., & sunderraman, r. (2010). single valued neutrosophic sets. multispace and multistructure,4, 410-413. wong, j. t. (2012). dss for 3pl provider selection in global supply chain: combining the multi-objective optimisation model with experts’ opinions, journal of intelligent manufacturing, 23(3), 599-614. © 2018 by the authors. submitted for possible open access publication under the terms and conditions of the creative commons attribution (cc by) license (http://creativecommons.org/licenses/by/4.0/). plane thermoelastic waves in infinite half-space caused decision making: applications in management and engineering vol. 4, issue 2, 2021, pp. 241-256. issn: 2560-6018 eissn: 2620-0104 doi: https://doi.org/10.31181/dmame210402241n * corresponding author. e-mail addresses: gnegiji@gmail.com (g. negi), anuj4march@gmail.com (a. kumar), pant.sangeet@gmail.com (s. pant), mangeyram@gmail.com (m. ram) optimization of complex system reliability using hybrid grey wolf optimizer ganga negi1, anuj kumar2, sangeeta pant2*, mangey ram3,4 1 department of mathematics, graphic era deemed to be university, dehradun, india 2 department of mathematics, university of petroleum and energy studies, dehradun, india 3 department of mathematics, computer science and engineering, graphic era, dehradun, india 4 institute of advanced manufacturing technologies, peter the great st., petersburg polytechnic university, saint petersburg, russia received: 30 march 2021; accepted: 20 july 2021; available online: 18 august 2021. original research paper abstract: reliability allocation to increase the total reliability has become a successful way to increase the efficiency of the complex industrial system designs. a lot of research in the past have tackled this problem to a great extent. this is evident from the different techniques developed so far to achieve the target. metaheuristics like simulated annealing, tabu search (ts), particle swarm optimization (pso), cuckoo search optimization (cs), genetic algorithm (ga), grey wolf optimization technique (gwo) etc. have been used in the recent years. this paper proposes a framework for implementing hybrid pso-gwo algorithm (hpsogwo) for solving reliability allocation and optimization problems of complex bridge system and life support system in space capsule. the supremacy/competitiveness of the proposed framework are demonstrated from the numerical experiments. comparison of the results obtained by hpsogwo with previously used algorithms named pso and gwo shows that in one problem named the complex bridge system, the hpsogwo uses lesser number of function evaluations as compared to pso and gwo. hence, the overall solutions obtained by hpsogwo are not only comparable to the previously obtained results by some of the other well-known optimization methods, but also better than that. keywords: cost function, metaheuristics, reliability allocation problems, particle swarm optimization (pso), grey wolf optimizer (gwo), hybrid psogwo algorithm (hpsogwo). mailto:gnegiji@gmail.com mailto:anuj4march@gmail.com mailto:pant.sangeet@gmail.com mailto:mangeyram@gmail.com negi et al./decis. mak. appl. manag. eng. 
4 (2) (2021) 241-256 242 1. introduction the present-day real-world problems of engineering have reached to an advanced level that have motivated the researchers to find ways to increase the efficiency of complex systems. reliability being the main criteria for this task and the attention of the researchers is diverted towards allocating reliability to the complex systems and the components. nowadays there has been lot of research in the field of reliability optimization considering its wide applications in real life and industry (atiqullah & rao, 1993; pham et al., 1995; eiben & schippers, 1998; kishor et al., 2009; jayabarathi et al., 2016;). such improvisations increase the efficiency and give better results for stochastic nonlinear optimization problems (ramírez-rosado & bernal-agustín, 2001). the constraints of weight, budget, volume, can be appropriately set, in reliability allocation problem (rap) to optimize reliability of the system (kishor et al., 2007; pant et al., 2015). due to the immense applications, rap problems have attracted the attention of many researchers to explore this technology (mohan & shanker, 1987; majety et al., 1999; pant & singh, 2011; kumar et al., 2016). basically, reliability optimization problems can be classified into three categories depending upon the decision variables involved. these are (i) reliability allocation (li et al., 2008; mirjalili et al., 2016; pant et al., 2017; kumar et al., 2019a, 2019b;) (ii) redundancy allocation (atiqullah & rao, 1993; misra & sharma, 1991a, 1991b; yang & deb, 2009) and (iii) reliabilityredundancy allocation (sakawa, 1978; coelho, 2009; deep & deepti, 2009). going by the concept of mathematical programming reliability allocation is a continuous nonlinear programming problem (nlp). redundancy allocation is a pure integer nonlinear programming problem (inlp) for nonlinear polynomial hard problems. reliability-redundancy allocation is covered under mixed integer nonlinear programming problem (minlp) for solving problems of nonconvex nature and combinatorial search space. last few decades have witnessed much research in the field of reliability allocation problem (rap) and reliability optimization by researchers to solve single objective and multiple objective optimization problem. basically, the solutions techniques used so far to solve rap and optimization problems are approximation, exact, heuristic and metaheuristic methods. among these are exact solution techniques for rap like the cutting plane algorithm was proposed by majety et al. (1999) with discrete-cost reliability data for components and other such techniques by hikita et al. (1986, 1992). random search algorithm for rap presented by mohan & shankar (1987) for complex system reliability optimization. three levels decomposition approach the khun tucker multiplier method for rap was given by salazar et al. (2006). among the metaheuristic techniques for rap ant colony technique applied by shelokar et al. (2002); nsga 2 by kishore et al. (2007, 2009); pso by pant et al. (2011); csa by kumar et al. (2016). these optimization techniques yield solutions for problems of convex nature and monotonicity. 2. literature review in order to solve complex reliability allocations problems and reliability redundancy allocation problems which are nonlinear optimization problems of nonconvex nature and combinatorial search spaces more advanced algorithms called the metaheuristics have been formulated. these require lot of computational effort to find optimal solutions. 
as proposed by wolpert & macready (1997) that one type of optimization algorithm is not enough for all optimization problems. so, some researchers are constantly working on developing different types of nature inspired optimization of complex system reliability using hybrid grey wolf optimizer 243 meta-heuristics technique. some of them recently developed are evolutionary algorithm (ea) (ramírez-rosado & bernal-agustín, 2001), ant colony optimization (aco), (zha et al., 2007; dorigo & gambardella, 1997) particle swarm optimization algorithm (pso) (eberhart & kennedy, 1995; kennedy & eberhart, 1997; hu & eberhart, 2002, pant & singh, 2011) grey wolf optimization technique (gwo) (mirjalili et al. 2014; fouad et al., 2015; jayabarathi et al., 2016; mosavi et al., 2016; kumar et al., 2017; kumar et al., 2019a, 2019b; pant et al., 2019;), flower pollination algorithm (pant et al., 2017) and cuckoo search algorithm (csa) (yang & deb, 2009). the detailed reviews of reliability optimization especially gwo, pso optimization techniques are given by kuo and prasad (2000); negi et al. (2020); padhye et al. (2009); uniyal et. al. (2020). these previous researches have led to the development of some of the recent researches in the field of metaheuristic algorithms and hybrid metaheuristic algorithms and their applications. hassan & rashid (2021) proposed a new evolutionary clustering algorithm (eca) based on social class ranking and meta-heuristic algorithms for stochastically analysing heterogeneous and multifeatured datasets. rahman & rashid (2020) presented the idea of learner performance-based behavior algorithm lpb inspired from the process of accepting graduated learners from high school in different departments at university and has a greater ability to deal with the large optimization problems. a collaborative working approach to path finding was introduced by shamsaldin et al. (2019) in the form of donkey and smuggler optimization algorithm to solve different problems such as tsp, packet routing, and ambulance routing. abdullah & ahmed (2019) proposed fitness dependent optimizer inspired by the bee swarming reproductive process which uses the problem fitness function value to produce weights for guiding during the exploration and exploitation phases. some of the modified and hybrid algorithms have also been in the recent years to solve many real-world engineering problems. a new k-means grey wolf algorithm was developed by mohammed et al. (2021) to enhance the limitations of the wolves’ searching process of attacking gray wolves. a novel hybrid woa-gwo presented by mohammed & rashid (2020) by embedding the hunting mechanism of gwo into the woa exploitation phase with the enhanced exploration for global numerical optimization and to solve the pressure vessel design problem. mohammed et al. (2019) introduced a systematic and meta-analysis survey of whale optimization algorithm modifying and hybridizing woa algorithm with bat algorithm in order to avoid local stagnation as well as increase the rate of convergence to achieve the global optimum solution. ibrahim et al. (2020) presented a hybrid meta-heuristic algorithm of shuffled frog leaping algorithm and genetic algorithm (sfga), an energy efficient service composition mechanism consuming minimum cost, response time and energy in a mobile cloud environment as compared to other algorithms. muhammed et al. 
(2020) proposed an improved fitness-dependent optimizer algorithm (ifdoa), which first randomizes and then minimizes the weight fitness values, and applied it to aperiodic antenna array designs. to forecast students' outcomes and improve the learning experiences of faculty and students, rashid et al. (2019) presented a hybrid system combining a multi-hidden recurrent neural network with a modified grey wolf optimizer. mukherjee et al. (2021) presented a multi-objective antlion optimizer for the ring tree problem with secondary sub-depots (mortpssd) to address telecommunication and logistics network problems by minimizing the circuits' total routing cost. in addition to the above optimizer for secondary depots, mukherjee et al. (2021) introduced a modified discrete antlion optimizer for the ring star problem (rspssd), which minimizes the cost by selecting suitable primary and secondary sub-depots.

in this paper, we present the use of a hybrid optimization technique, hybrid pso-gwo (hpsogwo), for reliability allocation problems. the idea is to apply the hpsogwo algorithm to minimize the cost of the complex bridge system and of the life support system in a space capsule, obtaining better performance in terms of the number of cost function evaluations and the number of search agents than pso and gwo used individually. both pso and gwo are population-based swarm intelligence (si) techniques. both involve few, easily tuned parameters, are simple to apply and execute, and converge well towards the global solution, which is why their hybrid can yield better results than other metaheuristics. section 3 gives a detailed explanation of the particle swarm optimization technique, section 4 describes grey wolf optimization, and section 5 gives an overview of hybrid algorithms and describes the hybrid pso-gwo algorithm (hpsogwo). the formulation of the mathematical models for the proposed problems is presented in section 6. section 7 analyses the results of the optimization techniques used, and section 8 presents the conclusion and further scope.

3. particle swarm optimization technique (pso)

particle swarm optimization (pso) simulates the social behaviour of a flock of birds (kennedy & eberhart, 1997; hikita et al., 1992; pant & singh, 2011; abd-elazim & ali, 2015). it is a population-based optimization technique.
the randomly generated initial swarm of particles, together with their random velocities, starts the algorithm. pbest represents the personal best position of each particle, whereas gbest denotes the particle with the best fitness value, hence called the global best particle. in the d-dimensional search space, $X_i = (x_i^1, x_i^2, \ldots, x_i^D)$ and $V_i = (v_i^1, v_i^2, \ldots, v_i^D)$ denote the position and velocity of the $i$th particle, the previous best position of the $i$th particle is denoted by $P_i = (p_i^1, p_i^2, \ldots, p_i^D)$, and the best particle according to fitness, the global best, is denoted by $P_g = (p_g^1, p_g^2, \ldots, p_g^D)$. the changes in velocity and position are expressed by the following equations (kennedy & eberhart, 1997; hikita et al., 1992; abd-elazim & ali, 2015):

$$v_{id}^{k+1} = w\, v_{id}^{k} + c_1 r_1 \left(p_{id} - x_{id}^{k}\right) + c_2 r_2 \left(p_{gd} - x_{id}^{k}\right) \quad (1)$$
$$x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1} \quad (2)$$

here $i = 1, 2, \ldots, N$ with $N$ the swarm size, $k$ the iteration number, $d = 1, 2, \ldots, D$, $w$ the inertia weight (controlling the momentum of the particle by weighting the contribution of the previous velocity), $c_1$ and $c_2$ the positive acceleration coefficients, and $r_1$ and $r_2$ random numbers between 0 and 1. the variation of $c_1$ and $c_2$ with time is given by the following equations, respectively:

$$c_1(t) = c_{1i} + (c_{1f} - c_{1i}) \cdot ITER / ITER_{MAX} \quad (3)$$
$$c_2(t) = c_{2i} + (c_{2f} - c_{2i}) \cdot ITER / ITER_{MAX} \quad (4)$$

initially, the value of $c_1$ is kept large and the value of $c_2$ small to ensure enough exploration of the search space and avoid local stagnation, which favours reaching the global best solution in the long run. later, a small $c_1$ and a large $c_2$ steer the swarm towards the population best, that is, the global optimum solution. the maximum velocity and position that a particle can attain in each dimension are bounded as follows:

$$v_{id}^{k+1} = \begin{cases} V_{max} & \text{if } v_{id}^{k+1} > V_{max} \\ -V_{max} & \text{if } v_{id}^{k+1} < -V_{max} \\ v_{id}^{k+1} & \text{otherwise} \end{cases} \quad (5)$$

$$x_{id}^{k+1} = \begin{cases} X_{max} & \text{if } x_{id}^{k+1} > X_{max} \\ X_{min} & \text{if } x_{id}^{k+1} < X_{min} \\ x_{id}^{k+1} & \text{otherwise} \end{cases} \quad (6)$$
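as an illustration of eqs. (1)-(6), the following python sketch performs one pso velocity-and-position update with clamping. the function and parameter names are our own illustrative choices and the snippet is a generic sketch of the textbook update, not the authors' matlab implementation.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5,
             v_max=0.25, x_min=0.0, x_max=1.0, rng=np.random.default_rng()):
    """One PSO update for the whole swarm: x, v, pbest are (N, D) arrays, gbest is (D,)."""
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    # eq. (1): inertia term + cognitive pull to pbest + social pull to gbest
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    # eq. (5): clamp velocities to [-v_max, v_max]
    v = np.clip(v, -v_max, v_max)
    # eqs. (2) and (6): move and keep particles inside the search box
    x = np.clip(x + v, x_min, x_max)
    return x, v

# tiny usage example: 10 particles in 5 dimensions (e.g. component reliabilities)
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, (10, 5))
v = np.zeros_like(x)
x, v = pso_step(x, v, pbest=x.copy(), gbest=x[0], rng=rng)
```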
4. grey wolf optimization

4.1. the guiding factor for the algorithm

gwo, presented by mirjalili et al. (2014), is an optimization technique based on the hierarchical behaviour and social intelligence of grey wolves. the entire hunting mechanism is carried out jointly by four categories of wolves, each with a particular role. alpha, the leader category, takes the decisions for the whole process and the others follow. the gwo algorithm uses this principle to find the global optimum solution. next in the hierarchy come beta, followed by delta and omega. these four initially become the candidate solutions, which are gradually improved in further iterations.

4.2. mathematical model formulation of the gwo algorithm

the model comprises three phases: surveying, surrounding, and attacking. the change of position of the attacking wolves is simulated by the following equations (mirjalili et al., 2014):

$$D = |C \cdot X_p(t) - X(t)| \quad (7)$$
$$X(t+1) = X_p(t) - A \cdot D \quad (8)$$

note that the equations use vectors, so they apply to any number of dimensions. $X(t)$ and $X(t+1)$ denote the present and new locations of the wolf, $X_p(t)$ is the location of the prey, and d is the distance vector between them. the values of a and c are calculated from:

$$A = 2a \cdot r_1 - a \quad (9)$$
$$C = 2 \cdot r_2 \quad (10)$$

where $r_1$ and $r_2$ are random vectors in the interval [0,1]. the components of the vector a are linearly decreased from 2 to 0 over the course of the iterations; because of the random variables in the expression, the value of A ranges from -2 to 2. alpha, beta and delta are taken as the three best solutions in gwo, since they are the strongest in the entire population and have the best knowledge of the prey location, so the other wolves update their positions as follows:

$$X(t+1) = \frac{1}{3}X_1 + \frac{1}{3}X_2 + \frac{1}{3}X_3 \quad (11)$$

where $X_1$, $X_2$ and $X_3$ are calculated as:
$$X_1 = X_\alpha(t) - A_1 \cdot D_\alpha, \qquad X_2 = X_\beta(t) - A_2 \cdot D_\beta, \qquad X_3 = X_\delta(t) - A_3 \cdot D_\delta \quad (12)$$

and $D_\alpha$, $D_\beta$ and $D_\delta$ are given by:
$$D_\alpha = |C_1 \cdot X_\alpha - X|, \qquad D_\beta = |C_2 \cdot X_\beta - X|, \qquad D_\delta = |C_3 \cdot X_\delta - X| \quad (13)$$

the pseudo code of gwo is given in figure 1 (mirjalili et al., 2014).

figure 1. pseudo code of the gwo algorithm

4.3. balancing of the effective hunting mechanisms

it is essential to survey enough before attacking the prey for the hunting mechanism to succeed. the leading wolves decide, and the wolves following them can then take appropriate positions to encircle the prey. for this, the parameter a has to be chosen so that the corresponding value of A is suitable: exploration requires |A| > 1, and to stimulate proper exploitation of the available conditions the parameter setting requires |A| < 1. the success of the exploitation depends to a great extent on rigorous and balanced exploration, so that the result does not stagnate or remain unrefined; gwo helps to achieve this efficiently.
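a compact sketch of the position update of eqs. (7)-(13) is given below. the helper names and the linear decay schedule for the parameter a are our own illustrative choices, following the description above; the snippet is a minimal sketch of the mechanism, not code from the paper.

```python
import numpy as np

def gwo_step(wolves, alpha, beta, delta, a, rng=np.random.default_rng()):
    """One GWO position update; wolves is (N, D), alpha/beta/delta are (D,) leaders."""
    new_positions = np.empty_like(wolves)
    for i, x in enumerate(wolves):
        candidates = []
        for leader in (alpha, beta, delta):
            A = 2.0 * a * rng.random(x.shape) - a      # eq. (9)
            C = 2.0 * rng.random(x.shape)              # eq. (10)
            D = np.abs(C * leader - x)                 # eq. (13)
            candidates.append(leader - A * D)          # eq. (12)
        new_positions[i] = sum(candidates) / 3.0       # eq. (11)
    return new_positions

# usage sketch: a is decreased linearly from 2 to 0 over the iterations, and the
# three leaders are re-selected from the best fitness values after every step, e.g.
#   a = 2.0 * (1.0 - t / max_iter)
#   wolves = gwo_step(wolves, alpha, beta, delta, a)
```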
5. hybrid algorithms

to use the best qualities of several metaheuristics together, researchers' attention has turned to hybrids of two metaheuristics, which aim to reach the global best solution with results much better than those of the individual metaheuristics in terms of quality, time and convergence rate. generally, the phenomena of exploration and exploitation (eiben & schippers, 1998) are regarded as if they cannot go hand in hand and one disturbs the progress of the other, but a balance between the two actually leads to the global optimum, the best solution in terms of avoiding local stagnation, an appropriate convergence rate and a better result. in a hybrid of two metaheuristic techniques, the techniques can be combined at two levels, low level or high level. in addition, the hybridization can be done in two ways: relay, that is one technique after the other, or coevolutionary, meaning the hybridized techniques run in parallel rather than sequentially. since two different techniques are used in generating the final solution, a hybrid is a mixed kind of technique. the challenge lies in choosing the appropriate level at which the techniques are used, as well as the suitable method, relay or parallel; a slight difference in these choices can lead to the global best solution or to no better solution at all. some of the hybrid techniques used successfully so far are gwo-aco (ab rashid, 2017), gwo-ga (singh & singh, 2017), gwo-ann (tawhid & ali, 2017) and pso-aco (holden & freitas, 2008).

to strengthen the process of exploitation, hybrids of pso have been developed by many researchers. mirjalili & hashmi (2010) proposed a hybrid of pso with the gravitational search algorithm (gsa), pso-gsa, to combine the advantages of pso with those of gsa for better performance in escaping local convergence. hpso by ahmed et al. (2013) combines particle swarm optimization (pso) with a genetic algorithm (ga) mutation technique, which gives much better results than plain pso. abd-elazim & ali (2015) introduced a hybrid of the bacterial foraging optimization algorithm (bfoa) and pso, called bacterial swarm optimization (bso), which proved effective for tuning with an svc. in order to avoid local stagnation and obtain better quality in terms of the global best and stability, gwo has also been hybridized with many other optimization techniques.

5.1. hybrid pso-gwo algorithm

to improve the convergence behaviour of metaheuristic techniques, researchers have started developing hybrids of some of the metaheuristics, as mentioned earlier. one of these is the hybrid pso-gwo technique (hpsogwo) (singh & singh, 2017). the advantage of hybridizing pso and gwo is that gwo improves exploration, as the wolves explore the search space thoroughly, whereas pso improves exploitation, so that convergence to the global optimum is achieved in good time. a proper balance of exploitation and exploration is maintained, which complements and strengthens the performance of both techniques taken together and avoids their individual shortcomings in terms of local stagnation or an unsuitable convergence rate. the modifications to the related equations are expressed through an inertia weight constant: the positions of the search agents are improved first, so that the searching and exploring process is bettered, which in turn controls the exploitation and exploration phenomena as a whole. the introduction of the inertia constant to control the surveying and attacking processes of the wolves is expressed as follows (singh & singh, 2017):

$$D_\alpha = |C_1 \cdot X_\alpha - w \cdot X|, \qquad D_\beta = |C_2 \cdot X_\beta - w \cdot X|, \qquad D_\delta = |C_3 \cdot X_\delta - w \cdot X| \quad (14)$$

to enhance the exploitation capacity of pso, the velocity and updated locations of the search agents are expressed by:

$$v_i^{k+1} = w\left\{v_i^k + c_1 r_1 \left(x_1 - x_i^k\right) + c_2 r_2 \left(x_2 - x_i^k\right) + c_3 r_3 \left(x_3 - x_i^k\right)\right\} \quad (15)$$
$$x_i^{k+1} = x_i^k + v_i^{k+1} \quad (16)$$

the pseudo code of the hpsogwo algorithm is given in figure 2 (singh & singh, 2017).

figure 2. pseudo code of the hpsogwo algorithm
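the modified update of eqs. (14)-(16) can be sketched as a single iteration of the hybrid in python. the parameter values and function names below are our illustrative assumptions; the sketch follows singh & singh (2017) as summarized above rather than reproducing the authors' matlab code.

```python
import numpy as np

def hpsogwo_step(x, v, alpha, beta, delta, a, w=0.5, c=0.5,
                 rng=np.random.default_rng()):
    """One hybrid PSO-GWO update; x and v are (N, D), alpha/beta/delta are (D,)."""
    # gwo part with inertia weight, eq. (14): distances to the three leaders
    x1 = np.empty_like(x)
    x2 = np.empty_like(x)
    x3 = np.empty_like(x)
    for leader, out in ((alpha, x1), (beta, x2), (delta, x3)):
        A = 2.0 * a * rng.random(x.shape) - a
        C = 2.0 * rng.random(x.shape)
        D = np.abs(C * leader - w * x)
        out[:] = leader - A * D
    # pso part, eqs. (15)-(16): velocity pulled towards x1, x2 and x3
    r1, r2, r3 = rng.random(x.shape), rng.random(x.shape), rng.random(x.shape)
    v = w * (v + c * r1 * (x1 - x) + c * r2 * (x2 - x) + c * r3 * (x3 - x))
    x = x + v
    return x, v
```

in a full run, one would typically re-select alpha, beta and delta from the best penalized cost values after every step and decrease a from 2 to 0 over the iterations, as in plain gwo.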
6. formulation of the mathematical model for the problems

the last few decades have witnessed a lot of research on the formulation of mixed configurations, since pure series or parallel configurations are not enough to design the complex systems of real-world engineering problems. the following problems of mixed configuration, with both series and parallel structures, based on reliability allocation have been solved using the hpsogwo technique. in this paper, the two problems considered are the complex bridge system and the life support system in a space capsule. these are nonlinear optimization problems subject to constraints on component reliability and system cost.

6.1. problem of the complex bridge system

the complex bridge system (padhye et al., 2009; pant & singh, 2011; kumar, pant & ram, 2017) has a mixed series-parallel configuration with a total of five components (fig. 3). the system reliability ($R_s$) and system cost ($C_s$) of the complex bridge network are given below:

$$R_s = r_1 r_4 + r_2 r_5 + r_2 r_3 r_4 + r_1 r_3 r_5 + 2 r_1 r_2 r_3 r_4 r_5 - r_1 r_2 r_4 r_5 - r_1 r_2 r_3 r_4 - r_2 r_3 r_4 r_5 - r_1 r_2 r_3 r_5 - r_1 r_3 r_4 r_5 \quad (17)$$

$$C_s = \sum_{i=1}^{5} a_i \exp\!\left[\frac{b_i}{1 - r_i}\right] \quad (18)$$

the optimization problem in mathematical form is:

minimize $C_s$
subject to: $0 \le r_i \le 1$, $i = 1, 2, 3, 4, 5$; $0.99 \le R_s \le 1$,
with $a_i = 1$ and $b_i = 0.0003$ for $i = 1, 2, 3, 4, 5$, where $r_i$ is the $i$th component's reliability.

6.2. problem of the space capsule

the life support system in a space capsule (anthony, 2006) presented below is composed of four components (fig. 4). this mixed series-parallel system is used for space exploration, and the related equations are as follows (kumar et al., 2017):

$$R_s = 1 - r_3\left[(1 - r_1)(1 - r_4)\right]^2 - (1 - r_3)\left[1 - r_2\{1 - (1 - r_1)(1 - r_4)\}\right]^2 \quad (19)$$

$$C_s = 2 K_1 r_1^{\alpha_1} + 2 K_2 r_2^{\alpha_2} + K_3 r_3^{\alpha_3} + 2 K_4 r_4^{\alpha_4} \quad (20)$$

where $K_1 = 100$, $K_2 = 100$, $K_3 = 200$, $K_4 = 150$ and $\alpha_i = 0.6$ for $i = 1, 2, 3, 4$.

minimize $C_s$
subject to: $0.5 \le r_i \le 1$, $i = 1, 2, 3, 4$; $0.9 \le R_s \le 1$.

figure 3. complex bridge system

figure 4. life support system in a space capsule
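the two cost functions and their reliability constraints translate directly into code. the python sketch below implements eqs. (17)-(20) together with a simple static penalty for the reliability constraint; the paper states that the simplest penalty function method was used but does not give its exact form, so the penalty constant and structure here are our assumption, and the function names are ours.

```python
import numpy as np

def bridge_reliability(r):
    """System reliability of the five-component complex bridge, eq. (17)."""
    r1, r2, r3, r4, r5 = r
    return (r1*r4 + r2*r5 + r2*r3*r4 + r1*r3*r5 + 2*r1*r2*r3*r4*r5
            - r1*r2*r4*r5 - r1*r2*r3*r4 - r2*r3*r4*r5 - r1*r2*r3*r5 - r1*r3*r4*r5)

def bridge_cost(r, a=1.0, b=0.0003):
    """System cost of the complex bridge, eq. (18), with a_i = 1 and b_i = 0.0003."""
    r = np.asarray(r, dtype=float)
    return float(np.sum(a * np.exp(b / (1.0 - r))))

def capsule_reliability(r):
    """System reliability of the life support system, eq. (19)."""
    r1, r2, r3, r4 = r
    return (1.0 - r3 * ((1 - r1) * (1 - r4))**2
            - (1 - r3) * (1 - r2 * (1 - (1 - r1) * (1 - r4)))**2)

def capsule_cost(r, K=(100, 100, 200, 150), alpha=0.6):
    """System cost of the life support system, eq. (20)."""
    r1, r2, r3, r4 = r
    k1, k2, k3, k4 = K
    return 2*k1*r1**alpha + 2*k2*r2**alpha + k3*r3**alpha + 2*k4*r4**alpha

def penalized(cost, reliability, r, r_min, penalty=1e6):
    """Assumed static-penalty objective: cost plus penalty times constraint violation."""
    violation = max(0.0, r_min - reliability(r))
    return cost(r) + penalty * violation

# example: evaluate one candidate allocation for the bridge problem (R_s >= 0.99)
r = [0.93, 0.94, 0.80, 0.93, 0.93]
print(bridge_reliability(r), penalized(bridge_cost, bridge_reliability, r, 0.99))
```

an optimizer such as the hpsogwo step sketched earlier can then be run on these penalized objectives, with the component reliabilities clipped to their respective bounds.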
figure 6. search history for problem 6.2

thus, the hybrid of pso and gwo gives an overall better performance than either optimization technique used individually. a comparison of the results shows that, although the individual techniques already yield good results, hpsogwo improves on the previously obtained results in one form or another. from the results it is clear that hpsogwo gives comparatively much better results than other metaheuristics such as pso and gwo, used individually earlier for such complex reliability allocation problems, in terms of a smaller number of function evaluations (table 1 and table 2).

8. conclusion and further scope

nature-inspired optimization algorithms have extended their roots into almost all complex optimization problems of modern-day industries. reliability allocation problems, which are usually np-hard in nature, are one of them.

in this article, a hybrid algorithm named hpsogwo has been used to solve two complex reliability allocation problems, namely the complex bridge system and the life support system in a space capsule. the hpsogwo algorithm has proved superior or comparable overall, in terms of a smaller number of function evaluations, compared to gwo and pso. the technique can also provide solutions to various reliability allocation problems (raps) and reliability-redundancy allocation problems (rraps) with the use of a proper penalty function. as further scope, decision makers can decide on the desired reliability allocation of the components as well as of the whole complex system, which can then be optimized using hpsogwo. together with this, the repair and maintenance of the components can also be part of the decision, so that the reliability of the whole system can be managed better to obtain competitive results with the hpsogwo technique. currently, the authors are working on numerous improvements related to benchmark problems in reliability allocation problems (raps) and reliability-redundancy allocation problems (rraps).

author contributions: each author has participated and contributed sufficiently to take public responsibility for appropriate portions of the content.

funding: this research received no external funding.

declaration of conflicting interests: the authors have no conflicts of interest.

references

ab rashid, m.f.f. (2017). a hybrid ant-wolf algorithm to optimize assembly sequence planning problem. assembly automation, 37(2), 238–248.

abd-elazim, s.m., & ali, e.s. (2015). a hybrid particle swarm optimization and bacterial foraging for power system stability enhancement. complexity, 21(2), 245–255.

abdullah, j. m., & ahmed, t. (2019). fitness dependent optimizer: inspired by the bee swarming reproductive process. ieee access, 7, 43473-43486.

ahmed, a., esmin, a., & matwin, s. (2013). hpsom: a hybrid particle swarm optimization algorithm with genetic mutation. international journal of innovative computing, information and control, 9(5), 1919–1934.

atiqullah, m. m., & rao, s.s. (1993). reliability optimization of communication networks using simulated annealing. microelectronics reliability, 33, 1303-1319.

coelho, l.s. (2009). an efficient particle swarm approach for mixed integer programming problem in reliability-redundancy optimization applications. reliability engineering and system safety, 94(4), 830-837.

deep, k., & deepti, d. (2009).
reliability optimization of complex systems through csomga. journal of information and computing science, 4, 163-172.

dorigo, m., & gambardella, l.m. (1997). ant colony system: a cooperative learning approach to the traveling salesman problem. ieee transactions on evolutionary computation, 1(1), 53–66.

eberhart, r., & kennedy, j. (1995). a new optimizer using particle swarm theory. in proceedings of the sixth international symposium on micro machine and human science (mhs'95), 39-43. doi: 10.1109/mhs.1995.494215.

eiben, a. e., & schippers, c.a. (1998). on evolutionary exploration and exploitation. fundamenta informaticae, 35(1–4), 35–50.

fouad, m.m., hafez, a.i., hassanien, a.e., & snasel, v. (2015). grey wolves optimizer-based localization approach in wsns. in 11th international computer engineering conference (icenco), ieee, 256–260.

hassan, b.a., & rashid, t.a. (2021). evolutionary clustering algorithm (eca). neural computing and applications. https://doi.org/10.1007/s00521-020-05649-1

hikita, m., nakagawa, y., nakashima, k., & narihisa, h. (1992). reliability optimization of systems by a surrogate-constraints algorithm. ieee transactions on reliability, 41, 473-480.

hikita, m., nakagawa, y., nakashima, k., & yamato, k. (1986). application of the surrogate constraints algorithm to optimal reliability design of systems. microelectronics and reliability, 26, 35-38.

holden, n., & freitas, a.a. (2008). a hybrid pso/aco algorithm for discovering classification rules in data mining. journal of artificial evolution and applications, article id 316145, 1-11.

hu, x., & eberhart, r. (2002). adaptive particle swarm optimization: detection and response to dynamic systems. in congress on evolutionary computation, 2, 1666-1670.

ibrahim, g.j., rashid, t. a., & akinsolu, m. o. (2020). an energy efficient service composition mechanism using a hybrid meta-heuristic algorithm in a mobile cloud environment. journal of parallel and distributed computing, 143.

jayabarathi, t., raghunathan, t., adarsh, b.r., & suganthan, p. n. (2016). economic dispatch using hybrid grey wolf optimizer. energy, 111, 630–641.

kennedy, j., & eberhart, r. (1997). a discrete binary version of the particle swarm algorithm. in ieee international conference on systems, man, and cybernetics, computational cybernetics and simulation, 5, 4104-4108.

kishor, a., yadav, s. p., & kumar, s. (2007). application of a multi-objective genetic algorithm to solve reliability optimization problem. in international conference on computational intelligence and multimedia applications, 458-462.

kishor, a., yadav, s. p., & kumar, s. (2009). a multi-objective genetic algorithm for reliability optimization problem. international journal of performability engineering, 5, 227–234.

kumar, a., pant, s., & ram, m. (2017). system reliability optimization using grey wolf optimizer algorithm. quality and reliability engineering international, 33(7), 1327-1335.

kumar, a., pant, s., & ram, m. (2019a). grey wolf optimizer approach to the reliability-cost optimization of residual heat removal system of a nuclear power plant safety system. quality and reliability engineering international, 35(7), 2228-2239.
kumar, a., pant, s., & ram, m. (2019b). multi-objective grey wolf optimizer approach to the reliability-cost optimization of life support system in space capsule. international journal of system assurance engineering and management, 10(2), 276-284.

kumar, a., pant, s., & singh, s. b. (2016). reliability optimization of complex system by using cuckoo search algorithm. in mathematical concepts and applications in mechanical engineering and mechatronics, igi global, 95-112.

kuo, w., & prasad, v.r. (2000). an annotated overview of system-reliability optimization. ieee transactions on reliability, 49, 176-187.

li, l., xue, b., niu, b., tan, l., & wang, j. (2008). a novel pso-de based hybrid algorithm for global optimization. in advanced intelligent computing theories and applications: with aspects of artificial intelligence, lecture notes in computer science, vol. 5227, 785–793. springer, berlin, germany.

majety, s. r. v., dawande, m., & rajgopal, j. (1999). optimal reliability allocation with discrete cost-reliability data for components. operations research, 47, 899-906.

mirjalili, s. m., & hashim, s. z. m. (2010). a new hybrid psogsa algorithm for function optimization. in proceedings of the international conference on computer and information application (iccia '10), 374–377, tianjin, china.

mirjalili, s., mirjalili, s. m., & lewis, a. (2014). grey wolf optimizer. advances in engineering software, 69, 46–61.

mirjalili, s., saremi, s., mirjalili, s. m., & coelho, l. s. (2016). multi-objective grey wolf optimizer: a novel algorithm for multi-criterion optimization. expert systems with applications, 47, 106–119.

misra, k. b., & sharma, u. (1991a). an efficient approach for multiple criteria redundancy optimization problems. microelectronics reliability, 31, 303-321.

misra, k. b., & sharma, u. (1991b). multicriteria optimization for combined reliability and redundancy allocation in systems employing mixed redundancies. microelectronics reliability, 31, 323-335.

mohammed, h. m., abdul, z. k., rashid, t. a., alsadoon, a., & bacanin, n. (2021). a new k-means grey wolf algorithm for engineering. world journal of engineering, issn 1708-5284.

mohammed, h. m., umar, s. u., & rashid, t. a. (2019). systematic and meta-analysis survey of whale optimization algorithm. computational intelligence and neuroscience. https://doi.org/10.1155/2019/8718571

mohammed, h., & rashid, t. (2020). a novel hybrid gwo with woa for global numerical optimization and solving pressure vessel design. neural computing and applications, 32, 14701–14718.

mohan, c., & shanker, k. (1987). reliability optimization of complex systems using random search technique. microelectronics reliability, 28, 513-518.

mosavi, m. r., khishe, m., & ghamgosar, a. (2016). classification of sonar data set using neural network trained by grey wolf optimization. neural network world, 26(4), 393.
muhammed, d.a., saeed, s.a.m., & rashid, t.a. (2020). improved fitness-dependent optimizer algorithm. ieee access, 8.

mukherjee, a., barma, p.s., dutta, j., panigrahi, g., kar, s., & maiti, m. (2021). a multi-objective antlion optimizer for the ring tree problem with secondary sub-depots. https://doi.org/10.1007/s12351-021-00623-8

negi, g., kumar, a., pant, s., & ram, m. (2020). gwo: a review and applications. international journal of system assurance engineering and management. https://doi.org/10.1007/s13198-020-00995-8

padhye, n., branke, j., & mostaghim, s. (2009). empirical comparison of mopso methods - guide selection and diversity preservation. in ieee congress on evolutionary computation, 2516-2523.

pant, s., & singh, s.b. (2011). particle swarm optimization to reliability optimization in complex system. in proceedings of the ieee international conference on quality and reliability, bangkok, thailand, 211-215.

pant, s., anand, d., kishor, a., & singh, s.b. (2015). a particle swarm algorithm for optimization of complex system reliability. international journal of performability engineering, 11(1), 33-42.

pant, s., kumar, a., & ram, m. (2019). solution of nonlinear systems of equations via metaheuristics. international journal of mathematical, engineering and management sciences, 4(5), 1108-1126.

pant, s., kumar, a., & ram, m. (2017). flower pollination algorithm development: a state of art review. international journal of system assurance engineering and management, 8(2), 1858-1866.

pham, h., pham, h. k., & amari, s. v. (1995). a general model for evaluating the reliability of telecommunications systems. commun. reliab. maintain. support. int. j., 2, 4–13.

rahman, c.m., & rashid, t. a. (2020). learner performance-based behaviour algorithm (lpb). https://doi.org/10.1016/j.eij.2020.08.003

ramírez-rosado, i. j., & bernal-agustín, j. l. (2001). reliability and costs optimization for distribution networks expansion using an evolutionary algorithm. ieee transactions on power systems, 16, 111-118.

rashid, t. a., abbas, d. k., & turel, y.k. (2019). a multi hidden recurrent neural network with a modified grey wolf optimizer. https://doi.org/10.1371/journal.pone.0213237

sakawa, m. (1978). multi-objective reliability and redundancy optimization of a series-parallel system by the surrogate worth trade-off method. microelectronics and reliability, 17, 465-467.

salazar, d., rocco, c. m., & galván, b. j. (2006). optimization of constrained multiple-objective reliability problems using evolutionary algorithms. reliability engineering & system safety, 91, 1057-1070.

shelokar, p. s., jayaraman, v. k., & kulkarni, b. d. (2002). ant algorithm for single and multi-objective reliability optimization problems.
quality and reliability engineering international, 18, 497-514.

singh, n., & singh, s. b. (2017). hybrid algorithm of particle swarm optimization and grey wolf optimizer for improving convergence performance. journal of applied mathematics, article id 20304889, 1-16.

tawhid, m. a., & ali, a. f. (2017). a hybrid grey wolf optimizer and genetic algorithm for minimizing potential energy function. memetic computing, 9(4), 347–359.

uniyal, n., pant, s., & kumar, a. (2020). an overview of few nature inspired optimization techniques and its reliability applications. international journal of mathematical, engineering and management sciences, 5(4), 732-743.

wolpert, d. h., & macready, w. g. (1997). no free lunch theorems for optimization. ieee transactions on evolutionary computation, 1, 67-82.

yang, x.s., & deb, s. (2009). cuckoo search via lévy flights. in proceedings of the world congress on nature & biologically inspired computing (nbic, india), ieee publications, usa, 210-214.

zha, j. h., liu, z., & dao, m. t. (2007). reliability optimization using multi-objective ant colony system approaches. reliability engineering & system safety, 92, 109-120.

© 2021 by the authors. submitted for possible open access publication under the terms and conditions of the creative commons attribution (cc by) license (http://creativecommons.org/licenses/by/4.0/).

decision making: applications in management and engineering
vol. 1, issue 2, 2018, pp. 1-15
issn: 2560-6018, eissn: 2620-0104
doi: https://doi.org/10.31181/dmame1802001m

selection of the railroad container terminal in serbia based on multi criteria decision-making methods

milan milosavljević1*, marko bursać1, goran tričković1

1 the school of railway applied studies, zdravka čelara 14, 11000 belgrade, serbia

received: 6 november 2017; accepted: 25 may 2018; available online: 27 may 2018.

original scientific paper

abstract: intermodal transport is one of the key elements of sustainable freight transport over medium and long distances. however, its efficiency in many cases depends on the location of the railroad container terminals (ct).
the favorable position of serbia provides an opportunity to establish a large number of container trains, which can lead to a more developed intermodal transport system in the entire balkans and beyond. in this paper the problem of the container terminal location in serbia is considered and resolved. the aim of this paper is to determine the potential macro location of the ct in serbia which will be most suitable for different stakeholders in the transport chain. choosing the most suitable alternative is a complex multi-criteria task. for this reason, a multi criteria decision-making model has been formulated which consists of a number of alternatives and criteria. the alternatives represent potential areas for a site, while the criteria include cargo flows, infrastructure, economic development, social and transport attractiveness and environmental acceptability. for defining the weights of the criteria two approaches are used, namely the delphi and the entropy method. three multi criteria decision-making methods, namely topsis, electre and mabac, are applied. by comparing the results of these three methods, an answer to the question of where to locate the ct is presented. this is the first step in determining the location of the container terminal; the next phase should respond to the issue of the micro location of the terminal. also, after certain customization, the model can be used for solving other categories of location problems.

key words: location model, container terminal, topsis, electre, mabac.

* corresponding author. e-mail addresses: mimilan89@gmail.com (m. milosavljević), markobursac1987@gmail.com (m. bursać), tricko86@gmail.com (g. tričković)

1. introduction

the efficiency of intermodal transport largely depends on the location of the container terminals. the sustainability of transport in europe requires an increasing reallocation between different modes of transport in order to reduce traffic congestion and protect the environment. therefore, the choice of the most favorable location of the railroad terminal is one of the most important strategies for optimization of the entire transport chain. due to its favorable geographical position and the important transport corridors located on its territory, the republic of serbia has a great potential for developing intermodal transport. considering that there is almost no such type of terminal in serbia, along with the tendency to join the european transport network, the aim of this paper is to determine the potential location of the ct.

there is a number of developed methods used for finding the most suitable location of a terminal, such as standard methods for finding the optimal location defined as the p-median problem (limbourg & jourquin, 2009). klose & drexl (2005) deal with different location problems formulated as optimization ones. in addition, a large number of location problems are solved using multi criteria decision-making methods. unlike conventional methods and techniques of operational research, these methods do not provide an „objectively best" solution. these methods are based on mathematical algorithms that are developed to help decision-makers in choosing the most suitable variant.
there is a large number of papers devoted to this issue, such as determining the location of a logistic center based on the electre method (žak & weglinski, 2014), the location of a logistic center on the black sea in turkey (uysal & yavuz, 2014), the electre i method (maroi et al., 2017), determining the location of the main postal center using the topsis method (miletić, 2007), logistic center location in the area of western serbia (tomić et al., 2014), and a location problem based on the ahp method (stević et al., 2015). some authors have compared several multi criteria methods, for example through a comparative analysis of two weighting criteria methods, entropy and critic, for air conditioner selection using moora and saw (vujičić et al., 2017). more recently, combinations of multi criteria decision-making techniques and fuzzy logic are used for solving location problems (tadić et al., 2015), such as the fuzzy-topsis method for selecting hospital locations (senvar et al., 2016) and the fuzzy-ahp method for determining solar field locations (asakereh et al., 2017). in addition to conventional methods, there are also others such as the mabac method for solving the location problem of wind farms in vojvodina (gigović et al., 2017), the copras-g method for container terminal operations optimization (barysiene, 2012), a hybrid fuzzy-ahp-mabac model for ranking potential locations for laying-up positions (božanić et al., 2016), the selection of transport and handling resources in logistics centers (pamučar et al., 2015) and the like.

2. problem formulation

the observed problem lies in the selection of the most suitable location/region in which the railroad terminal will be located. as potential locations for this terminal, railway sections from serbia are used, as well as the areas in which these sections are located. the total number of variants is 11, although the serbian railway network is divided into 12 sections: požarevac, lapovo, niš, zaječar, kraljevo, užice, pančevo, zrenjanin, novi sad, subotica and ruma. the belgrade railway section was not taken into consideration due to the existence of a container terminal in belgrade, in the belgrade marshalling yard „makiš".

2.1. definition of variants

for each variant, a railway section is associated with a particular area in which the section is located, although the boundaries of the section differ in terms of administrative division. the data about loading and unloading of railway freight cars are based on the real railway sections, although they cross the administrative boundaries of the areas, while the other data used in this paper are taken from the areas in which the sections are located.

variant 1 subotica is a railway section located in the northern part of serbia and it is the administrative center of the severna bačka district. its total area is close to 1784 km2, and its population amounts to 186 906 people. the region is characterized as average in many regards. it has an average level of economic development, an annual gdv per capita of 429 000 rsd, and its logistics and transport activities rely on one important road and rail corridor. the main advantage of this variant is high investment attractiveness because of two free zones, subotica and apatin. the unemployment rate in this region ranks among the lowest in the country (10,7%).
the volume of transported goods and the number of freight cars are the lowest (4 599 freight cars and 126 277 t), while in the case of unloaded goods in domestic and international traffic the region is in the pre-position.

variant 2 novi sad is the capital of the južna bačka district. the population in this area amounts to 615 371 people, while the total area is close to 4026 km2. the economic potential is high, considering a gdv per capita of 608 000 rsd, which is the highest value in the whole territory of serbia excluding belgrade. novi sad offers a great opportunity for the education of young people, with the highest number of high schools and faculties. the total volume of all transported goods in this section is average, close to 890 819 t, with 23 675 used freight cars. the international road corridor e75 and the railway corridor e85 pass through novi sad. the weaknesses of variant 2 are a high unemployment rate of 15,9% and the existence of only one free zone, novi sad. the region is attractive in terms of environment-friendliness, with low noise emission and the national park fruška gora.

variant 3 zrenjanin is the capital of the srednji banat district, located in the northeastern part of serbia. its total area is 3257 km2 and its population amounts to 187 667 people. the region is characterized by a high unemployment rate of 14,1%, which places this variant at the very top according to this criterion. gdv per capita is 416 000 rsd, while transport and logistic competitiveness is small because there is no large number of economic entities. although the volume of railway transport has been growing in recent years, this section is at the very bottom for the number of loaded freight cars. with 5 644 unloaded cars and 152 492 t of transported goods this region occupies the lowest position. the transport infrastructure in variant 3 is in a very poor condition: there is only one international railway line, while there are no state ia roads. this area is environment-friendly.

variant 4 pančevo is the capital of the južni banat district, with a population of 293 730 people and an area close to 4246 km2. the economic potential of this variant is slightly lower than average because of a gdv per capita of 384 000 rsd and a huge unemployment rate of 20,9%. another weakness of this variant is the very poor condition of the transport infrastructure and of connections with other nearby cities. the availability of transport infrastructure is lower than average, with two international railway lines and no state ia roads. investment attractiveness is low because there is a large number of business subjects. azotara, petrohemija and the oil refinery, in combination with the port, are some of the subjects that can contribute positively to this variant. unfortunately, it does not possess free zones. the total number of loaded and unloaded freight cars in domestic and international transport is 43 849, with 1 600 600 t of transported goods.

variant 5 ruma is located in the north-eastern part of the country, and it is the capital of the srem district. its area is around 3485 km2 and its population amounts to 312 278 people. the region is characterized by a higher than average gdv per capita of 411 000 rsd, and a higher unemployment rate of 18,3%. near this region is the šabac free zone, which increases investment attractiveness. variant 5 is environment-friendly, with a low level of noise.
the industrial attractiveness of this variant is reflected in the volume of transported goods, which amounted to 1 102 168 t in 2016, with 30 398 used freight cars.

variant 6 požarevac is located in the region of braničevo. its total area is 3857 km2, and its population amounts to 183 625 people. this variant has a low unemployment rate of 11% and large industrial attractiveness. with 89 877 freight cars and 3 154 202 t of transported cargo, it ranks first among all the variants. the reason for this is the steel company in smederevo, which uses the two railway stations radinac and smederevo. the european corridor e75, a state ia road and the railway lines e70 and e85 pass near smederevo.

variant 7 zaječar is located in the eastern part of the country in the region of zaječar. the total area of the region is 3624 km2 and the population is close to 119 967 people. gdv per capita is 314 000 rsd, which is the second lowest value. the unemployment rate is 18,3%, but this variant has a big potential, which is evident in the small number of logistic and transport companies and business subjects. the weakness of this variant is that both road and rail transport infrastructures are undeveloped; there are no state ia roads, while there is only one railway line. industrial attractiveness is good because of the mines in bor and majdanpek, and the total number of used freight cars is 58 602 with 1 508 932 t. there are no free zones in this region, either.

variant 8 lapovo is the railway section located in the central part of serbia in the region of pomoravlje. the total area of this section is 2614 km2 and its population amounts to 71 231 people. this section is located near two state ia roads and the railway corridors e70 and e85. the unemployment rate is huge (19%) and gdv per capita is 322 000 rsd. investment attractiveness is average; the svilajnac free zone is located in this region. the number of used freight cars is 23 562, and the total volume of transported goods is 946 831 t.

variant 9 niš is the railway section which covers the southern part of the country; it is the center of the region of niš. the total population of the region is 376 319 people, while the total area is 2728 km2. this variant has the highest unemployment rate in serbia, 24,7%. gdv per capita is 348 000 rsd, and there are two free zones, pirot and vranje. with 14 faculties and higher schools this region attracts a lot of young people and offers them a great opportunity for education. the volume of loaded and unloaded cargo is very small, amounting to 202 385 t of loaded cargo and 499 144 t of unloaded cargo. there are two road corridors and three important railway lines.

variant 10 kraljevo is the section located in the region of raška. the population of this region is 309 258 people and the total area of the region is 3923 km2. it is characterized by a low gdv per capita of 240 000 rsd. the region is attractive from the logistic and transport point of view. its benefits are the big industrial companies and centers located in kragujevac as well as the existence of two free zones, kruševac and kragujevac. the total volume of transported goods in 2016 was 765 523 t. the weaknesses of this region are a relatively poor condition of the transport infrastructure and serious social problems, including a very high unemployment rate of 21,6%. there are no highways in this region, and the railway line in this variant is in a very bad condition.
the region is considered to be environment-friendly because of the national park kopaonik and a low level of noise.

variant 11 užice is the railway section located in the western part of serbia in the region of zlatibor. this region has the largest area, close to 6140 km2. the total population is 286 549 people according to the 2011 population census. the unemployment rate is 15% and gdv per capita is 369 000 rsd. the level of logistics and transport competitiveness is small, which makes this region favorable only in terms of its location. the volume of transport was 1 051 473 t in 2016. the belgrade-bar railway line is in a very bad condition, while a highway from belgrade to bar is under construction.

2.2. formulation of criteria

c1 – availability of transport infrastructure (points). this maximized criterion is defined as the number of state ia roads and international railway lines that pass through each region or section of the railway network. it measures the accessibility of the region and the transport efficiency for distributing goods. it also reflects the condition of the road and rail infrastructure, taking into account water traffic if there is a port or terminal in the same region. the criterion is measured on a scale of 1-6, whereby point 1 is given to the region with the lowest number of corridors and the worst infrastructure condition, and point 6, consequently, to the best region.

c2 – economic development (in thousand rsd). this maximized criterion is defined as the annual value of gdv per capita for each region in serbia. based on this criterion, the economic potential of each region can be measured, i.e. it can be determined whether an investor would like to invest in the given region or not.

c3 – investment attractiveness (points). this maximized criterion uses a measurement scale of 1 to 10 points for assessing the overall level of attractiveness of the region. it is defined by the total number of free zones in and close to the region.

c4 – level of transport and logistics competitiveness (points). this minimized criterion is defined on a scale of 1 to 10 and shows the share of logistic and transport companies and business subjects in the region compared to their total number in serbia. this criterion is minimized because a new investor would first opt for a region with little competition. the data necessary for this criterion were based on experience and interviews with experts.

c5 – transport and logistics attractiveness (t). this maximized criterion measures the industrial attractiveness of each region. it is expressed as the total weight of goods loaded, unloaded and transported by rail in domestic and international transport. unfortunately, this criterion does not include data about goods transported by road. also, given that statistics about transported containers and the volume of goods in transit on the serbian railway network are only kept for the whole network, these data are not relevant and have not been taken into account when solving the problem.

c6 – unemployment rate (%). this minimized criterion is defined as the percentage of unemployed residents in the region. the level of social satisfaction affects the region. this criterion can also be characterized by components such as opportunities for education and career development (number of state faculties and high schools).

c7 – environment-friendliness (points). this maximized criterion defines the environment-friendliness of each region.
it includes the average daily and night level of noise in the centers of the regions and the number of fully protected territories such as national parks.

3. a multi criteria decision-making model

a multi criteria analysis implies the existence of several variants and criteria, some of which have to be minimized or maximized, where decisions are made in conflict conditions with the application of instruments that are more flexible than the mathematical methods of pure optimization. criteria that are to be maximized are in the profit criteria category, although they may not necessarily be profit criteria. similarly, the criteria that are to be minimized are in the cost criteria category. an ideal solution would maximize all the profit criteria and minimize all the cost criteria; normally, such a solution is not obtainable. in the literature a large number of multi criteria analysis methods can be found. however, not all the methods are equally represented and important, theoretically and practically. there are two types of multi criteria decision-making methods: compensatory and non-compensatory. compensatory methods are those which calculate the final solution by tolerating some bad features of a variant under the condition that all other features of this variant are favorable. they actually permit „trade-offs" between attributes: a slight decline in one attribute is acceptable if it is compensated by some enhancement in one or more other attributes. some of these methods are (dimitrijević, 2016):

- simple additive weighting (saw),
- technique for order preference by similarity to ideal solution (topsis),
- preference ranking organization method of enrichment evaluation (promethee),
- analytic hierarchy process (ahp), and
- elimination et choix traduisant la realite (electre).

in addition to these conventional methods, the following methods are increasingly used:

- evaluation based on distance from average solution (edas),
- complex proportional assessment (copras),
- evaluation of mixed data (evamix),
- combinative distance-based assessment (codas),
- weighted aggregated sum product assessment (waspas), and
- multi-attributive border approximation area comparison (mabac).

the presented model of the macro location of the container terminal was developed using three compensatory methods, i.e. topsis, electre and mabac, after which the results of the methods are compared and the most favorable variant is adopted for the macro location of the container terminal in serbia. these methods are used because of their common use in solving this type of problem, in addition to their simple use and easy definition of input parameters. the models are solved in microsoft excel, i.e. its add-in for multi criteria analysis called sanna.

the aim of this paper is to compare 11 variants, which represent sections of the railway network, in order to find the optimal solution for the railroad container terminal location. these sections are district control offices from which the management of a certain part of the railway network is performed. there are twelve sections on the serbian railway network, but in this model the belgrade section is not used because there is already a railroad container terminal in the belgrade marshaling yard. the criteria for comparison and selection of the best variant are described in the previous section and their values are shown in table 1.
table 1. the values of the criteria for the observed variants

variants    c1  c2   c3  c4  c5       c6    c7
subotica    2   429  2   6   441268   10,7  7,00
novi sad    2   608  1   10  890819   15,9  4,25
zrenjanin   1   416  1   2   386899   14,1  8,00
pančevo     2   384  0   9   1592715  20,9  3,75
ruma        3   411  1   1   1102168  18,3  8,00
požarevac   2   405  1   8   3154202  11,0  6,00
zaječar     1   316  0   7   1508932  15,5  7,50
lapovo      5   322  1   5   946831   19,0  5,50
niš         6   348  2   10  701979   24,7  3,25
kraljevo    1   245  2   4   765523   21,6  6,00
užice       1   369  1   3   1051473  15,0  4,75

according to table 1, each of the above criteria needs to be maximized, except for criterion 4 (level of transport and logistic competitiveness) and criterion 6 (unemployment rate, which is a logical choice because a lower unemployment rate is more favorable for the development of each region). the data about goods transported by railway and the number of freight cars (c5) were obtained from the statistics of the sector for freight transport of „serbian railways", nowadays „serbia cargo". criterion 1, availability of transport infrastructure, is covered by data from the statistical office of the republic of serbia and the working timetable, which was used for calculating the number of railway lines. data from the statistical office of the republic of serbia are used for the following criteria: economic development (c2), investment attractiveness (c3) and unemployment rate (c6). the yearly statistical handbook of the statistical office of the republic of serbia and the statistics of local governments are used for defining criterion 7, environment-friendliness.

3.1. criteria weighting

one of the main problems in multi criteria analysis is the weighting of the criteria (vuković, 2014). taking into account that the weights of the criteria can significantly affect the decision-making process, special attention must be paid to criteria weighting, which, unfortunately, is not always present in problem-solving. for that reason two methods are used here, the delphi and the entropy method.

3.1.1. delphi method

the weights of the criteria are defined through interviews with experts in the field of railway transport. the final values of the weight coefficients, based on the experts' answers and obtained using the delphi method, are given in table 2. the weights of the criteria are calculated through three iterations. the mean value, standard deviation and coefficient of variation are computed for each criterion, and the obtained average value of the coefficient of variation is 12,81%. in the next section, the models for locating the railroad container terminal using the topsis, electre and mabac methods are shown.

table 2. weights of the criteria obtained by the delphi method

criteria                        c1    c2 (thou. rsd)  c3    c4    c5 (t)  c6 (%)  c7
normalized weights of criteria  0,27  0,13            0,10  0,12  0,23    0,08    0,07

3.1.2. entropy method

determination of the objective criteria weights according to the entropy method is based on the measurement of the uncertain information contained in the decision matrix. it directly generates a set of weights for the given criteria based on the mutual contrast of the individual criteria values of the variants for each criterion, and then for all the criteria at the same time (vuković, 2014). determination of the objective criteria weights $w_j$ according to the entropy method is carried out in three steps (dimitrijević, 2016).

the first step involves the normalization of the criteria values $x_{ij}$ of the variants from the decision matrix $x = [x_{ij}]_{m \times n}$:

$p_{ij} = \dfrac{x_{ij}}{\sum_{i=1}^{m} x_{ij}}, \quad \forall i, j$   (1)

the entropy $e_j$ over all variants is calculated as:

$e_j = -\varepsilon \sum_{i=1}^{m} p_{ij} \ln p_{ij}, \quad \forall j$   (2)

where the constant $\varepsilon = 1/\ln m$ is used to guarantee that $0 \le e_j \le 1$. the degree of divergence $d_j$ is calculated as:

$d_j = 1 - e_j, \quad j = 1, \ldots, n$   (3)

since the value of $d_j$ is a specific measure of the intensity of the contrast of criterion $c_j$, the final relative weight of the criterion, in the third step of the method, is obtained by simple additive normalization:

$w_j = \dfrac{d_j}{\sum_{j=1}^{n} d_j}, \quad \forall j$   (4)

the final values of the weight coefficients based on the entropy method are given in table 3.

table 3. weights of the criteria obtained by the entropy method

criteria  c1     c2     c3     c4     c5     c6     c7
ej        0,915  0,990  0,977  0,938  0,928  0,987  0,984
dj        0,085  0,010  0,023  0,062  0,072  0,013  0,016
wj        0,301  0,036  0,083  0,220  0,256  0,046  0,058
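the entropy weights in table 3 can be reproduced directly from the raw data in table 1. the python sketch below follows eqs. (1)-(4); it is an illustration rather than the authors' spss/excel calculation. the columns c1, c2 and c4-c7 come out as in table 3 up to rounding, while c3, which contains zero entries, depends on how the zeros are treated in eq. (2) (here 0·ln 0 is taken as 0), so that column may differ slightly.

```python
import numpy as np

# raw decision matrix from table 1 (rows = variants subotica..užice, columns = c1..c7)
X = np.array([
    [2, 429, 2,  6,  441268, 10.7, 7.00],
    [2, 608, 1, 10,  890819, 15.9, 4.25],
    [1, 416, 1,  2,  386899, 14.1, 8.00],
    [2, 384, 0,  9, 1592715, 20.9, 3.75],
    [3, 411, 1,  1, 1102168, 18.3, 8.00],
    [2, 405, 1,  8, 3154202, 11.0, 6.00],
    [1, 316, 0,  7, 1508932, 15.5, 7.50],
    [5, 322, 1,  5,  946831, 19.0, 5.50],
    [6, 348, 2, 10,  701979, 24.7, 3.25],
    [1, 245, 2,  4,  765523, 21.6, 6.00],
    [1, 369, 1,  3, 1051473, 15.0, 4.75],
], dtype=float)

m = X.shape[0]
P = X / X.sum(axis=0)                                # eq. (1)
with np.errstate(divide="ignore", invalid="ignore"):
    plogp = np.where(P > 0, P * np.log(P), 0.0)      # treat 0*ln(0) as 0
e = -plogp.sum(axis=0) / np.log(m)                   # eq. (2), eps = 1/ln m
d = 1.0 - e                                          # eq. (3)
w = d / d.sum()                                      # eq. (4)
print(np.round(e, 3))   # compare with the ej row of table 3
print(np.round(w, 3))   # compare with the wj row of table 3
```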
3.2. application of the topsis method

the topsis method compares variants based on their distance from a positive and a negative ideal solution (hwang & yoon, 1981). the method is characterized by the calculation of the weighted normalized decision matrix and the formulation of the positive and negative ideal solutions. the method is based on the concept that the chosen variant should have the shortest distance from the positive ideal solution and the longest distance from the negative ideal solution (čičak, 2003). the weighted criterion matrix is shown in table 4.

table 4. weighted criterion matrix with the delphi method

variants   c1       c2       c3       c4       c5       c6       c7       di+      di-      ci
subotica   0,05692  0,04243  0,04714  0,02843  0,02258  0,03842  0,02448  0,18392  0,07634  0,29332
novi sad   0,05692  0,06013  0,02357  0,00000  0,04559  0,02415  0,01486  0,17721  0,06257  0,26095
zrenjanin  0,02846  0,04114  0,02357  0,05687  0,01980  0,02909  0,02797  0,20338  0,07209  0,26171
pančevo    0,05692  0,03798  0,00000  0,00711  0,08151  0,01043  0,01311  0,16217  0,07050  0,30300
ruma       0,08538  0,04065  0,02357  0,06397  0,05641  0,01756  0,02797  0,14032  0,10041  0,41711
požarevac  0,05692  0,04005  0,02357  0,01422  0,16143  0,03760  0,02098  0,12823  0,15291  0,54389
zaječar    0,02846  0,03125  0,00000  0,02132  0,07723  0,02525  0,02623  0,17998  0,06826  0,27499
lapovo     0,14230  0,03184  0,02357  0,03554  0,04846  0,01564  0,01923  0,12780  0,12635  0,49716
niš        0,17076  0,03442  0,04714  0,00000  0,03593  0,00000  0,01136  0,14919  0,15112  0,50321
kraljevo   0,02846  0,02423  0,04714  0,04265  0,03918  0,00851  0,02098  0,19463  0,06769  0,25803
užice      0,02846  0,03649  0,02357  0,04976  0,05381  0,02662  0,01661  0,18280  0,07124  0,28042
weights    0,27000  0,13000  0,10000  0,12000  0,23000  0,08000  0,07000
ideal      0,17076  0,06013  0,04714  0,06397  0,16143  0,03842  0,02797
basal      0,02846  0,02423  0,00000  0,00000  0,01980  0,00000  0,01136
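a compact python sketch of the topsis calculation is given below. the zero entries in the c4 and c6 columns of table 4 suggest that the minimized criteria were first converted to benefits as (max - x) before vector normalization; that reading is an inference from the table, not something stated in the text. under this assumption the sketch reproduces the weighted values of table 4 and the closeness coefficients of the delphi column of table 6 up to rounding.

```python
import numpy as np

# decision matrix from table 1 and delphi weights from table 2
X = np.array([
    [2, 429, 2,  6,  441268, 10.7, 7.00],
    [2, 608, 1, 10,  890819, 15.9, 4.25],
    [1, 416, 1,  2,  386899, 14.1, 8.00],
    [2, 384, 0,  9, 1592715, 20.9, 3.75],
    [3, 411, 1,  1, 1102168, 18.3, 8.00],
    [2, 405, 1,  8, 3154202, 11.0, 6.00],
    [1, 316, 0,  7, 1508932, 15.5, 7.50],
    [5, 322, 1,  5,  946831, 19.0, 5.50],
    [6, 348, 2, 10,  701979, 24.7, 3.25],
    [1, 245, 2,  4,  765523, 21.6, 6.00],
    [1, 369, 1,  3, 1051473, 15.0, 4.75],
], dtype=float)
w = np.array([0.27, 0.13, 0.10, 0.12, 0.23, 0.08, 0.07])
cost = np.array([False, False, False, True, False, True, False])   # c4 and c6 are minimized

Y = X.copy()
Y[:, cost] = Y[:, cost].max(axis=0) - Y[:, cost]   # assumed (max - x) conversion of cost criteria
V = w * Y / np.linalg.norm(Y, axis=0)              # vector normalization and weighting (table 4)
ideal, basal = V.max(axis=0), V.min(axis=0)
d_plus = np.linalg.norm(V - ideal, axis=1)         # distance from the positive ideal solution
d_minus = np.linalg.norm(V - basal, axis=1)        # distance from the negative ideal solution
closeness = d_minus / (d_plus + d_minus)
print(np.round(closeness, 5))   # compare with the delphi column of table 6
```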
3.3. application of the electre i method

the evaluation matrix for the electre method is the same as for the topsis method; the only difference is in the steps leading to the final solution. in this method, the variants are compared with each other in pairs; dominant and weak (or dominant and recessive) variants are identified, and then the weak and defeated alternatives are removed. in the electre method it is also necessary to define the concordance and discordance thresholds, which can be defined as the average values of all the values $c_{kl}$ and $d_{kl}$, calculated according to the following equations (5) and (6) (dimitrijević, 2016):

$\bar{c} = \dfrac{1}{m(m-1)} \sum_{k=1}^{m} \sum_{l=1}^{m} c_{kl}, \quad k \neq l$   (5)

$\bar{d} = \dfrac{1}{m(m-1)} \sum_{k=1}^{m} \sum_{l=1}^{m} d_{kl}, \quad k \neq l$   (6)

based on the value of the concordance index $c_{kl}$, which represents the domination of variant $v_k$ relative to $v_l$ based on the weighted criteria, the preference threshold value $\bar{c}$ is calculated, and its value is 0,5596. the discordance index $d_{kl}$ shows where variant $v_k$ is worse than variant $v_l$; in that case the dispreference threshold value $\bar{d}$ is calculated, and its value is 0,7364.

3.4. application of the mabac method

the basic setting of the mabac method is reflected in the definition of the distance of the criterion function of each of the observed alternatives from the border approximation area (pamučar & ćirović, 2015). the mathematical computation of this method is presented through the following six steps (božanić & pamučar, 2016):

step 1: creating the initial decision matrix x.
step 2: normalization of the elements of the initial decision matrix x.
step 3: calculation of the weighted matrix elements v.
step 4: the border approximation area for each criterion is determined by the expression:

$g_i = \left( \prod_{j=1}^{m} v_{ij} \right)^{1/m}$   (7)

the matrix of border approximation areas g for both weighting variants is given in table 5.

table 5. matrix of border approximation areas

weights of criteria  c1      c2      c3      c4      c5      c6      c7
g (delphi method)    0,3342  0,1782  0,1507  0,1698  0,2873  0,1217  0,1051
g (entropy method)   0,3726  0,0494  0,1251  0,3113  0,3198  0,0700  0,0871

step 5: calculation of the distance of the matrix elements from the border approximation area, q.
step 6: ranking the variants. the criteria function values by variants are obtained as the sum of the distances of the variants from the border approximation areas $q_i$. summing up the elements of matrix q by rows gives the final values of the criteria function of the variants:

$s_i = \sum_{j=1}^{n} q_{ij}, \quad j = 1, 2, \ldots, n, \; i = 1, 2, \ldots, m$   (8)

where n represents the number of criteria and m represents the number of variants.
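the mabac steps can likewise be written out in a few lines of python. the sketch below uses min-max normalization (reversed for the minimized criteria c4 and c6) and the same table 1 data and delphi weights as before; these choices reflect the usual mabac formulation rather than anything stated explicitly in the text, but with them the computed border areas match the delphi row of table 5 up to rounding, and the criteria function values should be close to the delphi column of table 8.

```python
import numpy as np

# same decision matrix (table 1) and delphi weights (table 2) as in the earlier sketches
X = np.array([
    [2, 429, 2,  6,  441268, 10.7, 7.00],
    [2, 608, 1, 10,  890819, 15.9, 4.25],
    [1, 416, 1,  2,  386899, 14.1, 8.00],
    [2, 384, 0,  9, 1592715, 20.9, 3.75],
    [3, 411, 1,  1, 1102168, 18.3, 8.00],
    [2, 405, 1,  8, 3154202, 11.0, 6.00],
    [1, 316, 0,  7, 1508932, 15.5, 7.50],
    [5, 322, 1,  5,  946831, 19.0, 5.50],
    [6, 348, 2, 10,  701979, 24.7, 3.25],
    [1, 245, 2,  4,  765523, 21.6, 6.00],
    [1, 369, 1,  3, 1051473, 15.0, 4.75],
], dtype=float)
w = np.array([0.27, 0.13, 0.10, 0.12, 0.23, 0.08, 0.07])
cost = np.array([False, False, False, True, False, True, False])

lo, hi = X.min(axis=0), X.max(axis=0)
N = (X - lo) / (hi - lo)                                       # step 2: min-max normalization
N[:, cost] = (X[:, cost] - hi[cost]) / (lo[cost] - hi[cost])   # reversed for minimized criteria
V = w * (N + 1.0)                                              # step 3: weighted matrix
G = np.exp(np.log(V).mean(axis=0))                             # step 4 / eq. (7): geometric mean
Q = V - G                                                      # step 5: distance from border area
S = Q.sum(axis=1)                                              # step 6 / eq. (8): criteria function
print(np.round(G, 4))   # compare with the delphi row of table 5
print(np.round(S, 4))   # compare with the delphi column of table 8
```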
4. results

based on the previously defined input parameters and criteria weights, the results of the considered methods show which of the given variants is the best for the container terminal location.

4.1. results obtained by the topsis method

the complete ranking of the variants using the topsis method is shown in table 6. the best variant for the macro location of the railroad container terminal under both weighting approaches is variant v6, the railway section požarevac.

table 6. complete order of variants with the topsis method

variant     delphi method (r.u.v. / rank)   entropy method (r.u.v. / rank)
subotica    0,29332 / 6                     0,26737 / 10
novi sad    0,26095 / 10                    0,18773 / 11
zrenjanin   0,26171 / 9                     0,32506 / 6
pančevo     0,30300 / 5                     0,28655 / 8
ruma        0,41711 / 4                     0,48188 / 3
požarevac   0,54389 / 1                     0,81239 / 1
zaječar     0,27499 / 8                     0,27463 / 9
lapovo      0,49716 / 3                     0,50997 / 2
niš         0,50321 / 2                     0,47136 / 4
kraljevo    0,25803 / 11                    0,29766 / 7
užice       0,28042 / 7                     0,33564 / 5

4.2. results obtained by the electre method

using the electre i method, two variants are dominant and much better than the others. these are variants 5 and 6, the railway sections ruma and požarevac. this method gave 40 preference relations among the variants and nine inefficient variants when using the delphi method for the criteria weights, and 42 preference relations when using the entropy method. the final results are shown through the aggregate dominance matrix in table 7, where in each cell the first number refers to the delphi weighting and the second number to the entropy weighting.

table 7. aggregate dominance matrix

      v1   v2   v3   v4   v5   v6   v7   v8   v9   v10  v11
v1    0/0  0/0  1/1  0/0  0/0  0/0  0/0  0/0  0/0  0/0  0/0
v2    1/1  0/0  1/1  0/0  0/0  0/0  0/0  0/0  1/1  1/1  0/0
v3    0/0  0/0  0/0  0/0  0/0  0/0  0/0  0/0  0/0  0/0  0/0
v4    0/0  1/1  0/0  0/0  0/0  0/0  1/1  0/0  1/1  1/1  1/1
v5    1/1  1/1  1/1  0/0  0/0  0/0  0/0  1/0  1/1  1/1  1/1
v6    0/0  1/1  1/1  1/1  0/0  0/0  1/1  1/0  1/1  1/1  1/1
v7    0/0  0/1  0/0  0/0  0/0  0/0  0/0  0/0  0/1  1/1  1/1
v8    1/1  1/1  1/1  0/0  0/0  0/0  0/0  0/0  0/1  1/1  0/0
v9    1/1  0/0  1/1  0/0  0/0  0/0  0/0  0/0  0/0  0/0  0/0
v10   0/0  0/0  1/1  0/0  0/0  0/0  0/0  0/0  1/1  0/0  0/0
v11   0/0  1/1  1/1  0/0  0/0  0/0  0/0  1/1  1/1  1/1  0/0

4.3. results obtained by the mabac method

the ranking of all the variants using the mabac method is shown in table 8.

table 8. rank of the variants using the mabac method

variant     delphi method (si / rank)   entropy method (si / rank)
subotica    0,0659 / 5                  0,0208 / 5
novi sad    -0,0062 / 7                 -0,1098 / 11
zrenjanin   0,0014 / 6                  0,0116 / 6
pančevo     -0,1007 / 11                -0,1066 / 10
ruma        0,1564 / 2                  0,2083 / 1
požarevac   0,1897 / 1                  0,1658 / 3
zaječar     -0,0732 / 9                 -0,0689 / 9
lapovo      0,1254 / 3                  0,1749 / 2
niš         0,0860 / 4                  0,0881 / 4
kraljevo    -0,0774 / 10                -0,0268 / 8
užice       -0,0266 / 8                 0,0014 / 7

4.4. comparison between the methods

based on the results obtained using the electre method, the best and only efficient variants under both weighting approaches are v5 and v6, ruma and požarevac. comparing the topsis and mabac methods, under both weighting approaches, the best variant is v6 in three of the four cases. also, in all situations the first four variants are always the same: požarevac (v6), ruma (v5), lapovo (v8) and niš (v9). the ranks of the variants are given in table 9.

table 9. comparison of the topsis and mabac methods

variant     mabac (delphi)  topsis (delphi)  mabac (entropy)  topsis (entropy)
subotica    5               6                5                10
novi sad    7               10               11               11
zrenjanin   6               9                6                6
pančevo     11              5                10               8
ruma        2               4                1                3
požarevac   1               1                3                1
zaječar     9               8                9                9
lapovo      3               3                2                2
niš         4               2                4                4
kraljevo    10              11               8                7
užice       8               7                7                5

the general conclusion is that the railroad container terminal should first be located in the area of the railway section požarevac, in the region of braničevo; požarevac is the best region for the location. this variant scores high in terms of its volume of transported goods and high investment attractiveness. the transport infrastructure of this region is at an average level, while the unemployment rate is very low. a clear advantage of this region is its great connectivity with other regions and the existence of main road and rail corridors. looking at the complete range of variants, across all the methods and weighting variants, it can be concluded that those with a high volume of transport and good accessibility of infrastructure can be potential locations. regions (railway sections) like kraljevo or zrenjanin should not be taken into further consideration because they would not justify the existence of a terminal by any parameter.

5. conclusion

a railroad container terminal location problem, like any other location problem, is a very complex task which requires a detailed analysis of different segments and parameters. the model presented in this paper was developed using multi criteria decision-making methods. the macro location of the terminal is defined, which represents the first phase of determining its potential location.
the proposed methodology has a universal character and can be applied to different types of location models, both for the selection of the location of railroad terminals and for other railway logistics location problems. further development of the model is based on a more detailed analysis of all the input parameters. in particular, it is necessary to analyze the flows of goods more closely, including the volume of goods transported by road or water. also, the analysis of transport infrastructure can be expanded to include water transport and its impact on potential locations. in addition, the analysis of environmental parameters, as well as of transport safety in each region, can be approached in more detail. market analysis, investment attractiveness and other economic criteria are another direction for the development of the model. the model can also be improved by using more relevant data for the criteria weights, obtained by other calculation methods. for a more detailed analysis and comparison of the results, other methods such as electre iii/iv, saw and some newer ones can be applied. the next step in our research and development is the formulation and solving of the second phase of the observed problem, that is, the micro location of the railroad container terminal. this approach requires an analysis at the micro level, within the region, in order to find the most suitable site for the location of the railroad container terminal.

references

asakereh, a., soleymani, m., & sheikhdavoodi, m. (2017). a gis-based fuzzy-ahp method for the evaluation of solar farms locations: case study in khuzestan province, iran. solar energy, 155, 342-353.

barysiene, j. (2012). a multi-criteria evaluation of container terminal technologies applying the copras-g method. transport, 27(4), 364-372.

božanić, d., pamučar, d., & karović, s. (2016). primene metode mabac u podršci odlučivanju upotreba snaga u odbrambenoj operaciji. tehnika menadžment, 66(1), 129-136.

božanić, d., pamučar, d., & karović, s. (2016). use of the fuzzy ahp mabac hybrid model in ranking potential locations for preparing laying-up positions. vojnotehnički glasnik, 64(3), 705-729.

čičak, m. (2003). modeliranje u železničkom saobraćaju. saobraćajni fakultet, beograd.

dimitrijević, b. (2016). višekriterijumsko odlučivanje. saobraćajni fakultet, beograd.

gigović, lj., pamučar, d., božanić, d., & ljubojević, s. (2017). application of the gis-danp-mabac multi-criteria model for selecting the location of wind farms: a case study of vojvodina, serbia. renewable energy, 103, 501-521.

hwang, c. l., & yoon, k. (1981). multiple attribute decision making: methods and applications. springer, new york.

karović, s., & pušara, m. (2010). kriterijumi za angažovanje snaga u operacijama. novi glasnik, (3-4), 37-58.

keshavarz ghorabaee, m. et al. (2015). multi-criteria inventory classification using a new method of evaluation based on distance from average solution (edas). informatica, 26(3), 435-451.

keshavarz ghorabaee, m. et al. (2016). a new combinative distance-based assessment (codas) method for multi-criteria decision-making. economic computation and economic cybernetics studies and research, 50(3), 25-44.

klose, a., & drexl, a. (2005). facility location models for distribution system design. european journal of operational research, 162(1), 4-29.

knjižice reda vožnje uz red vožnje za 2016/2017. godinu, sektor za prevoz robe „železnice srbije", beograd, 2016.

limbourg, s., & jourquin, b.
(2009). optimal rail-road container terminal locations on the european network. transportation research part e, 45(4), 551-563.

maroi, a., mourad, a., & mohamed, n. o. (2017). electre i based relevance decision-makers feedback to the location selection of distribution centers. journal of advanced transportation, 1-10.

miletić, s. (2007). metodologija izbora glavnih poštanskih centara i lokacije poštanskih centara. xxv simpozijum o novim tehnologijama u poštanskom i telekomunikacionom saobraćaju postel 2007, 201-212.

milosavljević, m., bursać, m., & tričković, g. (2017). the selection of the railroad container terminal in serbia based on multi criteria decision making methods. vi international symposium new horizons 2017 of transport and communications, 7, 534-543.

mučibabić, s. (2003). odlučivanje u konfliktnim situacijama. vojna akademija, beograd.

opštine i regioni u republici srbiji, republički zavod za statistiku, beograd, 2016.

pamučar, d., & ćirović, g. (2015). the selection of transport and handling resources in logistics centers using multi-attributive border approximation area comparison (mabac). expert systems with applications, 42(6), 3016-3028.

pamučar, d., vasin, lj., đorović, b., & lukovac, v. (2012). dizajniranje organizacione strukture upravnih organa logistike korišćenjem fuzzy pristupa. vojnotehnički glasnik, 60(3), 143-167.

regionalni bruto domaći proizvod, regioni i oblasti republike srbije, republički zavod za statistiku, beograd, 2015.

senvar, o., otay, i., & bolturk, e. (2016). hospital site selection via hesitant fuzzy topsis. ifac-papersonline, 49-12, 1140-1145.

srđević, b., medeiros, y., & schaer, m. (2003). objective evaluation of performance criteria for a reservoir system. vodoprivreda, 35(3-4), 163-176.

statistike za 2011, 2012, 2013, 2014, 2015 i 2016. godinu, sektor za plan i analizu „železnice srbije", beograd, 2015.

stević, ž., vesković, s., vasiljević, m., & tepić, g. (2015). the selection of logistics center location using ahp method. 2nd logistics international conference, 86-91.

tadić, s., zečević, s., & krstić, m. (2014). a novel hybrid mcdm model based on fuzzy dematel, fuzzy anp and fuzzy vikor for city logistics concept selection. expert systems with applications, 41, 8112-8128.

tomić, v., marinković, z., & marković, d. (2014). application of topsis method in solving location problems, the case of western serbia. research & development in heavy machinery, 20(3), 97-104.

uysal, h. t., & yavuz, k. (2014). selection of logistics centre location via electre method: a case study in turkey. international journal of business and social science, 5(9), 276-289.

vujičić, m., papić, m., & blagojević, m. (2017). comparative analysis of objective techniques for criteria weighting in two mcdm methods on example of an air conditioner selection. tehnika menadžment, 67(3), 422-429.

vuković, d. (2014). višekriterijumski izbor podsistema jgtp od strane društvene zajednice, studija slučaja „beovoz". tehnika, 69(1), 121-126.

žak, j., & weglinski, s. (2014). the selection of the logistics center location based on mcdm/a methodology. transport research procedia, 3, 555-564.

© 2018 by the authors. submitted for possible open access publication under the terms and conditions of the creative commons attribution (cc by) license (http://creativecommons.org/licenses/by/4.0/).
decision making: applications in management and engineering vol. 4, issue 1, 2021, pp. 33-50. issn: 2560-6018 eissn: 2620-0104 doi: https://doi.org/10.31181/dmame2104033d * corresponding author. e-mail addresses: rishidwivedi12@yahoo.co.in (r. dwivedi), kprasad.prod@nitjsr.ac.in (k. prasad), nabankur2009@gmail.com (n. mandal), shwetasinghaka12@gmail.com (s. singh), mayank1293vardhan@gmail.com (m. vardhan), dragan.pamucar@va.mod.gov.rs (d. pamucar).

performance evaluation of an insurance company using an integrated balanced scorecard (bsc) and best-worst method (bwm)

rishi dwivedi 1, kanika prasad 2*, nabankur mandal 3, shweta singh 1, mayank vardhan 1 and dragan pamucar 4 1 department of finance, xavier institute of social service, ranchi, india 2 department of production and industrial engineering, national institute of technology, jamshedpur, india 3 department of mechanical engineering, mckv institute of engineering, west bengal, india 4 department of logistics, military academy, university of defence in belgrade, belgrade, serbia received: 19 october 2020; accepted: 5 december 2020; available online: 13 december 2020. original scientific paper

abstract: the present economic and financial business environment is undergoing a quick and accelerating revolution and paradigm shift, resulting in growing uncertainty and complexity. therefore, the need for an all-inclusive and far-reaching performance measurement model is universally felt, as it can provide management-oriented information and act as a supporting tool in developing, inspecting, and interpreting the policy-making strategies of an enterprise to achieve competitive advantages. hence, this paper proposes an application of the balanced scorecard (bsc) model in an insurance organization for coordinating and regulating its corporate vision, mission, and strategy with organizational performance through the interrelation of different layers of business perspectives. in the next stage, a framework unifying the bsc and best-worst method (bwm) models is implemented for the very first time in the insurance domain to assess its performance over two time periods. the integrated bsc-bwm model can help managers and decision-makers to figure out and interpret the competing strength of the said enterprise and consecutively expedite efficient and compelling decision making. nevertheless, this integrated model is embraced and selected for a certain categorical business, and there is sufficient scope for its future application to distinct industries. key words: service sector, insurance industry, bsc model, best-worst method, bwm, performance evaluation.

1. introduction

insurance is a contract or an agreement between an individual and a company, where the company pledges to pay the sum assured to its customers against the insurance based on its terms and policies. the most significant reason for having insurance is that it provides security in many ways, to name a few: it generates financial resources, promotes economic growth, keeps commerce moving, ensures business and family stability, and encourages savings, thereby securing future goals. a well-designed insurance plan helps in managing risks.
in today's competitive world every company in the insurance sector tries hard to satisfy its customers. every day the companies are introducing new plans and policies to attract new customers and retain their initial customer base. the insurance sector plays a vital role in the development of the economy of any country. it is the only sector that garners long terms savings and generates funds for the development of the capital market and infrastructure, and hence providing stability to the growth of the economy. besides, insurance is a vital component in carrying out smooth operational processes of national economies throughout the world. in developed countries, such as germany, england, switzerland, france, etc. insurance has become an important component of the economy as it contributes majorly to the global market. the indian insurance sector has faced many stumbling blocks to attain its current position. liberalization, privatization, and globalization (lpg) reforms in the year 1991 have played an important part in the development of this sector in india. based on a report published by india brand equity foundation in 2019, the overall insurance industry in india is expected to reach us$ 280 billion by 2020. the life insurance industry in the country is expected to grow by 12 % to 15 % annually for the next three to five years. hence, the insurance sector is one of the booming sectors of the indian economy. traditionally, the performance evaluation of an insurance organization’s progress is carried out based on financial and customer factors only, which are mostly past indicators of success. the future drivers of progress for the enterprise are generally encompassed in the learning and growth and internal business perspectives of the balanced scorecard (bsc) model and are not taken into consideration in the traditional models of performance evaluation. bsc model considers both lagging and leading factors. the leading factors are also termed as futuristic parameters. they include the internal business perspective and the learning and growth perspective. the bsc model works with a holistic and integrated view of the business. it is a present-day performance measurement technique designed to overcome the drawback of traditional performance measurement systems. it is a performance measurement tool that consists of a set of measures that facilitates the enterprise's view of its overall performance. however, with time the bsc model has also been used as a strategic management tool as it can identify the key performance indicators (kpis) of a company. in this paper, a bsc model is proposed for an insurance company operating in india while considering multiple factors such as profit after tax (pat), operating profit ratio, asset under management (aum), number of products, market share, etc. impacting the performance of the considered organization. next, the best-worst method (bwm) is employed in this paper to solve the multi-criteria decision-making (mcdm) problem. according to the bwm technique, the best or the most important and the worst or the least important business performance evaluation criteria of an performance evaluation of an insurance company using an integrated bsc and bwm 35 insurance company are identified first by the decision-makers. further, pair-wise comparisons are carried out for best to others identified criteria, and others chosen criteria to worst criterion. 
a maxi-min problem is then formulated and solved to determine the optimal weights of the different business performance parameters. a consistency ratio is estimated in the next step of bwm methodology to check and verify the reliability of the comparisons (vesković et al., 2020). furthermore, an integrated bsc-bwm structure is also designed in this paper to evaluate and appraise the progress of selected insurance enterprises over two time periods concerning key performance measures of the proposed bsc model to demonstrate the efficiency and effectiveness of management strategies applied. the results derived from the implementation of the above models would not only facilitate business performance measurement of the organization during a time period in quantitative terms but would also help the decision-makers to recognize what should be carried out and measured in an organization to reinforce and boost its productivity, performance, and progress. the results obtained from the application of the above model also contribute to the long term sustenance of the organization. 2. review of literature rezaei (2015) proposed a new technique bwm to solve multi-criteria decisionmaking problems. rezaei et al. (2016) developed a supplier selection model for food industry using the bwm tool. ahmadi et al. (2017) employed bwm in manufacturing companies to analyze the social sustainability of their supply chains. gupta (2018) applied hybrid bwm to identify the service quality parameters in the airline industry and then prioritized them based on customers’ needs. rezaei et al. (2018) developed a weighted logistics performance index based on the bwm method. salimi and rezaei (2018) proposed a multi-criteria decision-making method called bwm to calculate the weightage of research and development measures in small and medium-sized enterprises (smes). zhao et al. (2018) and kushwaha et al. (2020) implemented a hybrid framework on the basis of the bwm technique to assess the comprehensive benefit of eco-industrial parks in terms of circular economy and sustainability. pamucar (2020) employed bwm for determining criteria weights in supply chain management. brunelli and rezaei (2019) examined the bwm methodology for the mcdm problem from a more mathematical perspective. kheybari et al. (2019) developed a bwm model for bioethanol facility location selection in provinces of iran. zolfani and chatterjee (2019) not only investigated the weighting of important and related criteria for sustainable design but also evaluated the similarities and differences between the step-wise weight assessment ratio analysis (swara) and bwm techniques. chen et al. (2011) proposed an effective performance evaluation model based on the bsc technique which helped decision-makers to understand apt actions and achieve a competitive advantage. mendes et al. (2012) applied the bsc model in the urban hygiene and solid waste division of the loulé municipality in the south of portugal. pesic and dahlgaard (2013) suggested that there were strong positive correlations between the bsc perspectives and european foundation for quality management (efqm) excellence model criteria. hakkak and ghodsi (2014) concluded that the bsc model helped organizations in achieving sustainable competitive advantage. staš et al. (2015) designed a conceptual framework for creating the green transport (gt) and bsc models from the viewpoint of industrial companies and supply chains using an appropriate mcdm method. 
dwivedi and chakraborty (2016) developed a bsc model for an indian thermal power plant to help the organization to r. dwivedi et al./decis. mak. appl. manag. eng. 4 (1) (2021) 33-50 36 make strategic and tactical decisions. anjomshoae et al. (2017) proposed a dynamic bsc model in the humanitarian supply chain. dwivedi et al. (2018) designed a technique based on bsc for an indian seed manufacturing organization to know about its performance so that the managers can align a company’s operation with the existing business environment. abdelghany and abdel-monem (2019) applied the bsc model that provided the utilities' managers with a fast but comprehensive view of the utilities’ performance. mohammadi et al. (2019) developed a feasibility test model for creative experiences based on the bsc technique through qualitative content analysis. it can be comprehended from the ongoing literature review that earlier researchers have individually developed tools either by applying bsc or bwm for different real-world scenarios. the previous research works by past researchers have not consolidated bsc and bwm together. thus, by applying the two methodologies together, this research work focuses on doing the performance evaluation of an insurance organization in india while understanding the leading and lagging parameters that contribute to the success and failure of the selected organization. consequently, this integrated bsc and bwm model would fill the present gap for the real-time practical implementation of combined bsc as a bwm tool. this model would provide the latest perspective and motion for addressing real-world business performances and their assessment. 3. bsc model traditional performance measurement systems study and review the progress of an organization basically on the basis of a short-term financial objective. those are no longer relevant to conquer the challenges faced by the organizations in recent times. furthermore, with dynamic and growing business environment organizations have to make sure that their strategies are transformed into subsequent actions through a more meticulous and precise consideration of the objectives of related stakeholders. bsc model is often suggested and approved as an inclusive management tool linking critical strategic and short-term action planning (kaplan, 1994; badi et al., 2019). this technique is developed in such a way that it cancels the most typical and trivial mistake of the existing traditional systems of performance management, i.e. describing and reporting only on the basis of financial data. in today’s cut-throat aggressive and competitive market, it is even more critical to attaining a balance between financial and non-financial data in management reporting and recording. therefore, bsc is developed as a modern performance evaluation procedure to overcome the defect of the formerly adopted performance measurement systems through introducing four perspectives, i.e. financial perspective, customer perspective, internal perspective, and learning and growth perspective on which development and pace of an organization would be assessed. financial and customer measures take into account the past performance of the organization and they are termed as lagging indicators. on the other hand, internal business process, and learning and growth perspective are leading indicators. thus, a bsc model provides a holistic and integrated view of the business concerning four perspectives to the management. 
all the kpis that are identified for the bsc model under each perspective must fit the sequence of cause and effect relationships within them. 3.1. designing of the bsc model for an insurance sector organization developing a suitable bsc model for an enterprise needs a subtle evaluation of the organization’s foundations, core values, beliefs, ideas, opportunities, financial performance evaluation of an insurance company using an integrated bsc and bwm 37 position, short-term and long-term goals, and operating business. the confidentiality of the studied organization is maintained here and hence, the name of the insurance enterprise is not being revealed and hereafter, it is referred to as abc limited. it is an organization that is growing at a rapid pace for a strong presence in the domestic insurance market. abc limited has exclusive, remarkable, and innovative insurance products that help it to compete in the market. all the data required for the development of a performance measurement tool based on the bsc model pertains to the financial years 2016-2017 and 2017-2018. here, a focused group is selected constituting of subject experts to develop a bsc model for abc limited, while keeping a balanced representation of kpis from each perspective. figure 1 displays the developed bsc model for abc limited. it can be observed that the developed bsc model identified 20 business performance indicators that provide the management with a concise summary of the kpis of abc limited. these performance measures also assist in appropriately coordinating and regulating the business processes of abc limited with its overall policy. each identified performance parameter signifies a particular goal and competence of abc limited for all the four perspectives of the developed bsc model. for example, the key performance measure indicator of pat suggests how much the organization really earns after paying interest and tax and how much it can utilize for its day to day business activities, whereas the operating profit ratio explains the profit a company generates after paying for variable costs of production, such as wages, raw materials, etc. the earning per share (eps) is a critical and essential financial measure, which illustrates and reveals the profitability of a company. it is of utmost relevance and vital for people and investors trading in the stock market. eps of a company is directly related to its profitability. on the other hand, gross written premium (gwp) is the total revenue from a contract expected to be received by an insurance company before deductions for reinsurance and ceding commissions. moreover, average claim settlement time (in days), number of insurance products, number of branches, market share, and number of agents reveal the relationship of abc limited with its customers, which acts as a stimulant for its future growth. further, the number of policies issued, aum, number of employees, the amount spent on corporate social responsibility (csr) activities, and numbers of transactions managed (in lakhs) are the internal performance measures that show the operational competence of abc limited to deliver its products and provide services consistently. 
next, net promoter score (nps), training expenses, transactions managed and policies issued within the turnaround time (tat), management expenses, and employees' remuneration and welfare benefits represent the learning and growth aspect in the organization, which are of predominant importance for its long-term success while operating in a competitive insurance sector. thus, the developed bsc model can expedite an effective monitoring of the organization's overall progress. this model successively helps the decision-makers in future strategy formulation. r. dwivedi et al./decis. mak. appl. manag. eng. 4 (1) (2021) 33-50 38 figure 1: developed bsc model for abc limited 4. bwm technique bwm is a multi-criteria decision-making (mcdm) tool developed to overcome the drawbacks of previously existent methods, like the analytical hierarchy process (ahp) in 2015 (rezaei, 2015). bwm can be used in various decision-making fields, such as health, information technology, engineering, business, economics, and agriculture. in fundamental, wherever the objective is to rank and select a preference among a set of options, this method can be used. the pertinent features of bwm as compared to most current mcdm methods are that it requires fewer comparisons of data and leads to more dependable, steady, logical, and rational comparisons, which implies that the bwm model produces more decisive and stable results (bozanic et al., 2020). the bwm technique is employed to examine a set of alternatives with respect to some chosen criteria. it is based on a systematic pairwise comparison of the decision criteria, i.e. after identifying the decision criteria, two criteria are selected as the best decision criterion and the worst decision criterion by the decision-makers or experts. the best criterion is the one that has the most important and crucial role in making the decision, while the worst criterion has the opposite significance (pamucar & ecer, 2020). the decision-makers then give their preferences of the best decision criterion over all the others and also their preferences of all the other criteria over the worst decision criterion using a number from a predefined scale of 1 to 9. these two sets of pairwise comparisons and correlation are used as input for an optimization problem, the optimal results of which are the weights of the criteria. performance evaluation of an insurance company using an integrated bsc and bwm 39 5. development of a business performance measurement tool for an insurance sector organization employing bsc and bwm tools the performance evaluation of an insurance company is dependent upon various factors. here, the important drivers impacting the performance of selected insurance organization is already identified as the 20 kpis of the bsc model. next, in order to determine the relative importance of selected kpis, it is worthwhile to employ an mcdm method. several mcdm methods have been applied in earlier studies for different selection problems. in this study, bwm is implemented for the evaluation of identified kpis for the chosen insurance organization. this methodology has been successfully applied in several distinct real-world problems. moreover, compared to other similar existing mcdm methods, bwm needs less pairwise comparison of data and the results achieved by it are more consistent. 5.1. steps in implementation of bwm technique the steps followed in bwm are as follows: step 1. identify a set of evaluation criteria. 
in this step, a set of criteria {c1, c2, c3, …, cn} is chosen for making a decision.

step 2. the best criterion (most imperative or most significant) and the worst criterion (least imperative or least influential) are determined based on the opinion of the decision-maker(s) or expert(s).

step 3. the preference of the best decision criterion over all the other decision criteria is determined using a score between 1 and 9, where a score of 1 means equal preference between the best criterion and another criterion and a score of 9 means extreme preference of the best criterion over the other criterion. the result of this step is the best-to-others (bo) vector ab = (ab1, ab2, ab3, …, abn), where abj indicates the preference of the best criterion b over criterion j; it follows that abb = 1.

step 4. the preference of all the other decision criteria over the worst criterion is determined using a score between 1 and 9. the result of this step is the others-to-worst (ow) vector aw = (a1w, a2w, a3w, …, anw)t, where ajw shows the preference of criterion j over the worst criterion w; it follows that aww = 1.

step 5. the optimal weights (w1*, w2*, w3*, …, wn*) are calculated. the optimal weights of the criteria satisfy the following requirement: for each pair wb/wj and wj/ww, the ideal situation is wb/wj = abj and wj/ww = ajw. to satisfy these conditions for all j, a solution should be found in which the maximum absolute difference among |wb/wj − abj| and |wj/ww − ajw| over all j is minimized. considering the non-negativity and sum conditions for the weights, the following problem results:

min ξ
subject to: |wb/wj − abj| ≤ ξ, for all j
|wj/ww − ajw| ≤ ξ, for all j
Σj wj = 1, wj ≥ 0, for all j (1)

after solving problem (1), the optimal weights (w1*, w2*, w3*, …, wn*) and ξ* are obtained. ξ* can be seen as a direct indicator of the consistency of the comparison system: the closer the value of ξ* is to zero, the higher the consistency and, therefore, the more reliable and steadier the comparisons.
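as a concrete illustration of steps 1-5, the following python sketch solves problem (1) numerically with scipy's slsqp solver; it is not part of the original paper, and the function and variable names (bwm_weights, a_bo, a_ow) are assumptions of this illustration. it takes the best-to-others and others-to-worst vectors together with the positions of the best and worst criteria and returns the weight vector and ξ*.

```python
# minimal sketch of the bwm weight calculation described in steps 1-5,
# assuming scipy is available; names are illustrative, not from the paper.
import numpy as np
from scipy.optimize import minimize

def bwm_weights(a_bo, a_ow, best, worst):
    """solve problem (1): minimize xi subject to |w_b/w_j - a_bj| <= xi and
    |w_j/w_w - a_jw| <= xi, with sum(w) = 1 and w >= 0."""
    n = len(a_bo)
    x0 = np.append(np.full(n, 1.0 / n), 0.5)  # initial guess: equal weights + xi

    def objective(x):
        return x[-1]  # the consistency indicator xi

    cons = [{"type": "eq", "fun": lambda x: np.sum(x[:n]) - 1.0}]
    for j in range(n):
        cons.append({"type": "ineq",
                     "fun": lambda x, j=j: x[-1] - abs(x[best] / x[j] - a_bo[j])})
        cons.append({"type": "ineq",
                     "fun": lambda x, j=j: x[-1] - abs(x[j] / x[worst] - a_ow[j])})

    bounds = [(1e-6, 1.0)] * n + [(0.0, None)]  # keep weights positive
    res = minimize(objective, x0, bounds=bounds, constraints=cons, method="SLSQP")
    return res.x[:n], res.x[-1]  # optimal weights and xi*
```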
5.2. application of the bwm in the insurance organization

through the literature review and experts' views, 20 kpis impacting the performance of the selected insurance enterprise under the four perspectives of the bsc model were already identified in the previous section. in this research work, each perspective of the bsc model is treated as a main business performance criterion. next, under each main business performance parameter of the chosen organization, five different kpis are considered as sub-business performance criteria, as shown in table 1.

table 1. performance criteria of abc limited
customer perspective: market share (%); average claims settlement time (in days); number of branches; number of products; number of agents
financial perspective: pat (in rs 000); operating profit ratio (%); earnings per share (eps); gwp (in rs 000); claims paid (in rs 000)
internal business perspective: number of employees; number of policies issued; aum (in rs crore); amount spent on csr activities (in rs); number of transactions managed (in lakhs)
learning and growth perspective: nps; training expenses (in rs 000); transactions managed and policies issued within tat (%); management expenses (in rs 000); employees' remuneration and welfare benefits (in rs 000)

after the four main business parameters and the five sub-business performance criteria under each of those main parameters are listed, the next step is to estimate the relative weights of all sub-business performance parameters. this is carried out by first finding the global weights of the four main business performance parameters. after that, the local weights of the sub-business performance criteria under each main business parameter are computed. the final weight of each sub-business performance criterion is determined by utilizing equation (2):

final weight of sub-business performance criterion = local weight of sub-business performance criterion × global weight of main business performance parameter (2)

after carefully evaluating the four main business performance parameters with respect to the identified organization's mission and operational constraints, the financial perspective is identified as the best criterion whereas the learning and growth perspective is selected as the worst one for abc limited. the experts specified the preference of the best criterion with respect to the other main business performance criteria as shown in table 2. table 3 displays the other main business performance criteria's preference over the least important criterion.

table 2. preference of the best criterion with respect to the other main business performance criteria (best to others; most important: financial perspective)
financial perspective: 1; customer perspective: 2; internal business perspective: 4; learning and growth perspective: 5

table 3. other main business performance criteria's preference over the least important criterion (others to the worst; least important: learning and growth)
financial perspective: 5; customer perspective: 4; internal business perspective: 2; learning and growth perspective: 1

next, the optimal global weights of the main business performance criteria for abc limited are calculated by solving equation (1) while implementing the steps of the bwm technique discussed in the earlier section. table 4 shows the estimated global weights for all main business performance criteria together with the consistency ratio (zolfani et al., 2020).

table 4. global weights for main business performance criteria
customer perspective: 0.2796
financial perspective: 0.4946
internal business perspective: 0.1398
learning and growth perspective: 0.0860
consistency ratio, ξ*: 0.0645
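to make the calculation behind table 4 concrete, the short sketch below reuses the hypothetical bwm_weights helper from the earlier illustration and feeds it the comparison vectors of tables 2 and 3; since problem (1) is solved numerically, the resulting weights should come out close to, though not necessarily identical with, the values reported in table 4.

```python
# best-to-others and others-to-worst vectors taken from tables 2 and 3;
# criterion order: customer, financial, internal business, learning and growth.
a_bo = [2, 1, 4, 5]   # preference of the best criterion (financial) over each criterion
a_ow = [4, 5, 2, 1]   # preference of each criterion over the worst (learning and growth)

weights, xi_star = bwm_weights(a_bo, a_ow, best=1, worst=3)
names = ["customer", "financial", "internal business", "learning and growth"]
for name, w in zip(names, weights):
    print(f"{name}: {w:.4f}")
print(f"consistency indicator xi*: {xi_star:.4f}")
```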
it can be observed from table 4 that the financial perspective, with the highest weight of 0.4946, is the most critical and essential criterion when abc limited attempts to accomplish its mission and objectives. this is followed by the customer perspective and the internal business perspective with weights of 0.2796 and 0.1398 respectively. the learning and growth perspective has the lowest criterion weight of 0.0860, showing its minimal relevance and importance in the comprehensive business growth of abc limited. the consistency ratio (ξ*) is close to zero, i.e. 0.0645, which indicates the high reliability of the comparisons.

furthermore, the sub-business performance criteria under the main business performance parameter of the financial perspective are examined to compute their local weights. here, pat is identified as the best sub-criterion whereas claims paid is selected as the worst one for abc limited. the best-sub-criterion-to-others and others-to-worst-sub-criterion comparisons for the financial perspective of abc limited are shown in table 5 and table 6.

table 5. preference of the best sub-criterion of the financial perspective with respect to the other sub-business performance criteria (best to others; most important: pat)
pat: 1; operating profit ratio: 2; eps: 4; gwp: 5; claims paid: 9

table 6. other sub-business performance criteria's preference over the least important criterion of the financial perspective (others to the worst; least important: claims paid)
pat: 9; operating profit ratio: 8; eps: 6; gwp: 4; claims paid: 1

local weights for the five sub-business performance criteria under the main business performance parameter of the financial perspective are estimated by following the steps of the bwm implementation discussed in the earlier section and are displayed in table 7.

table 7. local weights for the sub-business performance criteria of the financial perspective
pat: 0.4457
operating profit ratio: 0.2713
eps: 0.1357
gwp: 0.1085
claims paid: 0.0388
consistency ratio, ξ*: 0.0969

it can be interpreted from table 7 that pat is the most crucial sub-business performance criterion under the financial perspective with a local weight of 0.4457, followed by the operating profit ratio with a local weight of 0.2713. claims paid, with a local weight of 0.0388, has minimal significance under the financial perspective. the consistency ratio ξ* for this particular estimation is 0.0969, showing high consistency and reliability. in a similar manner, the relative significance of the sub-business performance criteria encompassed under the other three main business performance criteria, i.e. the customer, internal business, and learning and growth perspectives, is calculated as shown in table 8.
table 8. local weights for the sub-business performance criteria of the customer, internal business, and learning and growth perspectives
customer perspective: market share = 0.4416; average claims settlement time = 0.2589; number of products = 0.1726; number of agents = 0.0863; number of branches = 0.0406; consistency ratio, ξ* = 0.0761
internal business perspective: aum = 0.4615; number of policies issued = 0.2706; number of employees = 0.1353; number of transactions managed = 0.0902; amount spent on csr activities = 0.0424; consistency ratio, ξ* = 0.0796
learning and growth perspective: nps = 0.4560; transactions managed and policies issued within tat = 0.2606; training expenses = 0.1303; employees' remuneration and welfare benefits = 0.1042; management expenses = 0.0489; consistency ratio, ξ* = 0.0651

after the global weights of all four main business performance criteria and the local weights of the 20 sub-business performance parameters are estimated, the final weights of those sub-business performance criteria are computed using equation (2). for example, to estimate the final weight of pat, its local weight of 0.4457 is multiplied by the global weight of the financial perspective under which it falls, i.e. 0.4946; thus, the final weight of pat is calculated as 0.2205. the computed final weights of all sub-business performance criteria of abc limited are shown in table 9.

table 9. the final weight of all sub-business performance criteria of abc limited; for each kpi the local weight and the final weight (global weight × local weight) are given
customer perspective (global weight 0.2796): market share (0.4416, 0.1235); average claims settlement time (0.2589, 0.0724); number of products (0.1726, 0.0483); number of agents (0.0863, 0.0241); number of branches (0.0406, 0.0114)
financial perspective (global weight 0.4946): pat (0.4457, 0.2205); operating profit ratio (0.2713, 0.1342); eps (0.1357, 0.0671); gwp (0.1085, 0.0537); claims paid (0.0388, 0.0192)
internal business perspective (global weight 0.1398): aum (0.4615, 0.0645); number of policies issued (0.2706, 0.0378); number of employees (0.1353, 0.0189); number of transactions managed (0.0902, 0.0126); amount spent on csr activities (0.0424, 0.0059)
learning and growth perspective (global weight 0.0860): nps (0.4560, 0.0392); transactions managed and policies issued within tat (0.2606, 0.0224); training expenses (0.1303, 0.0112); employees' remuneration and welfare benefits (0.1042, 0.0090); management expenses (0.0489, 0.0042)
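the final-weight step of equation (2), tabulated above, is a simple element-wise product of local and global weights; a minimal python sketch of it is given below, with hypothetical names and the numerical values copied from tables 4 and 7.

```python
# global weights of the main perspectives (table 4) and local weights of the
# financial-perspective kpis (table 7); values are copied from the paper's tables.
global_weights = {"financial": 0.4946, "customer": 0.2796,
                  "internal business": 0.1398, "learning and growth": 0.0860}
local_weights_financial = {"pat": 0.4457, "operating profit ratio": 0.2713,
                           "eps": 0.1357, "gwp": 0.1085, "claims paid": 0.0388}

# equation (2): final weight = local weight x global weight of the parent perspective
final_weights_financial = {kpi: round(lw * global_weights["financial"], 4)
                           for kpi, lw in local_weights_financial.items()}
# pat comes out near 0.220; small differences from table 9 stem from rounding
# of the tabulated weights.
print(final_weights_financial)
```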
5.3. designing a performance measure index for abc limited

after the comparative importance of all sub-business performance criteria is determined for abc limited, an index is developed to assess the enterprise's overall business performance. to compute the performance measure index and examine the progress of abc limited with respect to the performance evaluation parameters of the bsc model, the related data for all the 20 recognized sub-business performance criteria are required for two different time periods. the first set of actual data related to the identified 20 performance measure indicators, for the initial period, is set as the baseline value, which in this case is derived from abc limited's annual report for the financial year 2016-17. the current period value constitutes the second data set for those 20 kpis and is drawn from the annual report of abc limited for the financial year 2017-18. an initial value of 100 is assigned to all sub-business performance criteria and, afterward, their weighted points are calculated using equation (3):

initial period weighted point = normalized performance measure weight (npmw) × initial point (3)

afterward, the business performance measure index for the initial period is estimated by summing all the initial period weighted points computed for each sub-business performance criterion. table 10 shows a detailed computation of the performance measure index of abc limited for the initial period.

table 10. the initial period performance index for abc limited (baseline value; npmw; initial point; initial period weighted point)
pat: 427973; 0.2205; 100; 22.05
operating profit ratio: 2; 0.1342; 100; 13.42
market share: 2; 0.1235; 100; 12.35
average claims settlement time: 46; 0.0724; 100; 7.24
eps: 0.97; 0.0671; 100; 6.71
aum: 2355; 0.0645; 100; 6.45
gwp: 18426964; 0.0537; 100; 5.37
number of products: 114; 0.0483; 100; 4.83
nps: 26; 0.0392; 100; 3.92
number of policies issued: 1373056; 0.0378; 100; 3.78
number of agents: 6000; 0.0241; 100; 2.41
transactions managed and policies issued within tat: 92; 0.0224; 100; 2.24
claims paid: 10131714; 0.0192; 100; 1.92
number of employees: 1702; 0.0189; 100; 1.89
number of transactions managed: 18; 0.0126; 100; 1.26
number of branches: 128; 0.0114; 100; 1.14
training expenses: 162191; 0.0112; 100; 1.12
employees' remuneration and welfare benefits: 1264666; 0.0090; 100; 0.90
amount spent on csr activities: 1109000; 0.0059; 100; 0.59
management expenses: 4646140; 0.0042; 100; 0.42
performance index: 100

to inspect and analyze the growth and advancement of the organization over the considered period, the performance measure index for the subsequent period is also needed. the first step in calculating the performance measure index for the current year is to calculate the current period points, using equation (4) for beneficial performance parameters and equation (5) for non-beneficial ones:

current period point = (current year value / baseline value) × 100 (4)
current period point = 200 − (current year value / baseline value) × 100 (5)

applying equation (6), the current period weighted point for each individual sub-business performance parameter is then estimated:

current period weighted point = normalized performance measure weight (npmw) × current period point (6)

next, the performance index for the current period is calculated by adding up the current period weighted points of all the individual sub-business performance measures. the comprehensive estimation of the current period index for abc limited is shown in table 11.
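before turning to table 11, the following python sketch (hypothetical names; only three kpis are shown) traces how the current period points of equations (4) and (5) and the weighted points of equation (6) add up toward the performance index.

```python
# sketch of the performance index calculation of equations (4)-(6); the three
# example kpis and their beneficial/non-beneficial tags follow tables 10 and 11.
kpis = {
    # name: (final weight npmw, baseline value, current value, beneficial?)
    "pat": (0.2205, 427973, 786281, True),
    "average claims settlement time": (0.0724, 46, 50, False),
    "training expenses": (0.0112, 162191, 203982, False),
}

def current_period_point(baseline, current, beneficial):
    ratio = current / baseline * 100.0
    return ratio if beneficial else 200.0 - ratio   # equations (4) and (5)

index = 0.0
for name, (npmw, base, cur, beneficial) in kpis.items():
    point = current_period_point(base, cur, beneficial)
    weighted = npmw * point                          # equation (6)
    index += weighted
    print(f"{name}: point {point:.2f}, weighted point {weighted:.2f}")
print(f"partial performance index over these three kpis: {index:.2f}")
```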
a comparison of the performance measure index values for the current period with those for the initial period, derived from table 11 and table 10 respectively, helps in assessing and understanding the comprehensive business performance of abc limited over the two time periods, thus allowing the policymakers to understand the effectiveness of the various applied policies. it can be noticed that abc limited has significantly improved its performance measure index to 129.98 for the current period, compared with 100 for the initial period. this implies that the said enterprise is advancing in an appropriate direction towards its organizational objectives, vision, and mission. furthermore, a comparison of the current period weighted point with the initial period weighted point for an individual sub-business performance criterion indicates abc limited's progression over the two time periods with respect to that particular criterion. it can be observed that abc limited shows significant enhancement in the current period weighted points of pat, operating profit ratio, aum, number of products, nps, number of policies issued, number of agents, and number of transactions managed with respect to their corresponding initial period values. the standard operating procedures through which the organization achieved this advancement in the said criteria should be maintained in the future as well. besides, although the organization is progressing in the right direction, there are sub-business performance parameters on which abc limited is not advancing in the correct proportion. the sub-business performance criteria on which abc limited is not performing well are market share, average claims settlement time, eps, transactions managed and policies issued within tat, training expenses, number of branches, and employees' remuneration and welfare benefits. these identified parameters are the areas where resources need to be deployed by the management of abc limited in order to convert them from non-performing into performing criteria in the future. this can help the organization in achieving long-term sustenance.

table 11. performance index for the current period for abc limited (current period value; current period point; current period weighted point)
pat: 786281; 183.72; 40.51
operating profit ratio: 3; 150.00; 20.13
market share: 1.65; 98.21; 12.13
average claims settlement time: 50; 91.30; 6.61
eps: 0.57; 58.76; 3.94
aum: 2992; 127.05; 8.20
gwp: 19507884; 105.87; 5.68
number of products: 130; 114.04; 5.50
nps: 40.5; 155.77; 6.11
number of policies issued: 2012574; 146.58; 5.54
number of agents: 7500; 125.00; 3.02
transactions managed and policies issued within tat: 90.4; 97.94; 2.20
claims paid: 9221366; 108.99; 2.09
number of employees: 1769; 103.94; 1.97
number of transactions managed: 29; 161.11; 2.03
number of branches: 127; 99.22; 1.13
training expenses: 203982; 74.23; 0.83
employees' remuneration and welfare benefits: 1351988; 93.10; 0.83
amount spent on csr activities: 2104468; 189.76; 1.13
management expenses: 4650523; 99.91; 0.42
performance index: 129.98

6. conclusion

the insurance sector is one of the leading sectors for corporate houses in terms of business operation. the proposed model is inclusive of both the lagging and the leading factors related to the business performance of an insurance sector organization. the integrated bsc and bwm technique is a comprehensive and broad model for real-world insurance sector application. through this study, it can be comprehended that a combined bsc-bwm model can be successfully applied to design a quantitative tool to measure the business performance of an organization and monitor the efficiency and effectiveness of the implemented strategies, which may drive the pathway for future policymaking. in this combined model, both past (lagging) and futuristic (leading) parameters are taken into account over two different periods.
this provides better analysis and scrutiny of business operating data for the management of an organization. the management can take corrective action to enhance the business potential if there is a significant deviation from the desired and expected business performance of the organization.

there is enormous competition among insurance sector organizations to enhance their profit margin, market share, and customer base while minimizing operating expenses and developing and launching new insurance products that are widely accepted by customers. in this paper, a bsc model is first developed for an insurance sector enterprise in india to identify a relevant range of financial and non-financial parameters that support effective and impressive business management. next, a business performance measurement tool combining the bsc and bwm methods is developed and applied in the said organization to exhibit how it can be engaged to monitor the performance of the firm, which can finally be utilized as a driver for the organization's future growth. this unified bsc and bwm model can also be employed by organizations in other sectors with minor adjustments. defining the kpis for the insurance sector requires time and attention, which is a limitation of the proposed work.

author contributions: each author has participated and contributed sufficiently to take public responsibility for appropriate portions of the content. funding: this research received no external funding. conflicts of interest: the authors declare no conflicts of interest.

references

abdelghany, m., & abdel-monem, m. (2019). balanced scorecard model for water utilities in egypt. water practice and technology, 14(1), 203-216.
ahmadi, h. b., kusi-sarpong, s., & rezaei, j. (2017). assessing the social sustainability of supply chains using best worst method. resources, conservation and recycling, 126, 99-106.
andjelkovic pesic, m., & dahlgaard, j. j. (2013). using the balanced scorecard and the european foundation for quality management excellence model as a combined roadmap for diagnosing and attaining excellence. total quality management & business excellence, 24(5-6), 652-663.
anjomshoae, a., hassan, a., kunz, n., wong, k. y., & de leeuw, s. (2017). toward a dynamic balanced scorecard model for humanitarian relief organizations' performance management. journal of humanitarian logistics and supply chain management, 7(2), 194-218.
badi, i., shetwan, a., & hemeda, a. (2019). a grey-based assessment model to evaluate health-care waste treatment alternatives in libya. operational research in engineering sciences: theory and applications, 2(3), 92-106.
bozanic, d., tešić, d., & milić, a. (2020). multicriteria decision making model with z-numbers based on fucom and mabac model. decision making: applications in management and engineering, 3(2), 19-36.
brunelli, m., & rezaei, j. (2019). a multiplicative best-worst method for multi-criteria decision making. operations research letters, 47(1), 12-15.
chen, f. h., hsu, t. s., & tzeng, g. h. (2011). a balanced scorecard approach to establish a performance evaluation and relationship model for hot spring hotels based on a hybrid mcdm model combining dematel and anp. international journal of hospitality management, 30(4), 908-932.
dwivedi, r., & chakraborty, s. (2016).
development of a strategic management tool in a thermal power plant using abc and bsc models. serbian journal of management, 11(1), 81-97.
dwivedi, r., chakraborty, s., sinha, a. k., & singh, s. (2018, june). development of a performance measurement tool for an agricultural enterprise using bsc and qfd models. in iop conference series: materials science and engineering (vol. 377, no. 1, p. 012214). iop publishing.
gupta, h. (2018). evaluating service quality of airline industry using hybrid best worst method and vikor. journal of air transport management, 68, 35-47.
hakkak, m., & ghodsi, m. (2015). development of a sustainable competitive advantage model based on balanced scorecard. international journal of asian social science, 5(5), 298-308.
kaplan, r. s. (1994). devising a balanced scorecard matched to business strategy. planning review, 22(5), 15-48.
kheybari, s., kazemi, m., & rezaei, j. (2019). bioethanol facility location selection using best-worst method. applied energy, 242, 612-623.
kushwaha, d. k., panchal, d., & sachdeva, a. (2020). risk analysis of cutting system under intuitionistic fuzzy environment. reports in mechanical engineering, 1(1), 162-173.
mendes, p., santos, a. c., perna, f., & teixeira, m. r. (2012). the balanced scorecard as an integrated model applied to the portuguese public service: a case study in the waste sector. journal of cleaner production, 24, 20-29.
mohammadi, a., moharrer, m., & babakhanifard, m. s. (2019). the business model and balanced scorecard in creative tourism: the ultimate strategy boosters. current issues in tourism, 22(17), 2157-2182.
pamucar, d. (2020). normalized weighted geometric dombi bonferoni mean operator with interval grey numbers: application in multicriteria decision making. reports in mechanical engineering, 1(1), 44-52.
pamucar, d., & ecer, f. (2020). prioritizing the weights of the evaluation criteria under fuzziness: the fuzzy full consistency method – fucom-f. facta universitatis, series: mechanical engineering, 18(3), 419-437.
rezaei, j. (2015). best-worst multi-criteria decision-making method. omega, 53, 49-57.
rezaei, j., nispeling, t., sarkis, j., & tavasszy, l. (2016). a supplier selection life cycle approach integrating traditional and environmental criteria using the best worst method. journal of cleaner production, 135, 577-588.
rezaei, j., van roekel, w. s., & tavasszy, l. (2018). measuring the relative importance of the logistics performance index indicators using best worst method. transport policy, 68, 158-169.
salimi, n., & rezaei, j. (2018). evaluating firms' r&d performance using best worst method. evaluation and program planning, 66, 147-155.
staš, d., lenort, r., wicher, p., & holman, d. (2015). green transport balanced scorecard model with analytic network process support. sustainability, 7(11), 15243-15261.
vesković, s., milinković, s., abramović, b., & ljubaj, i. (2020). determining criteria significance in selecting reach stackers by applying the fuzzy piprecia method. operational research in engineering sciences: theory and applications, 3(1), 72-88.
zhao, h., guo, s., & zhao, h. (2018). comprehensive benefit evaluation of eco-industrial parks by employing the best-worst method based on circular economy and sustainability. environment, development and sustainability, 20(3), 1229-1253.
zolfani, s. h., & chatterjee, p. (2019).
comparative evaluation of sustainable design based on step-wise weight assessment ratio analysis (swara) and best worst method (bwm) methods: a perspective on household furnishing materials. symmetry, 11(1), 74.
zolfani, s. f., yazdani, m., pamucar, d., & zarate, p. (2020). a vikor and topsis focused reanalysis of the madm methods based on logarithmic normalization. facta universitatis, series: mechanical engineering, 18(3), 341-355.

decision making: applications in management and engineering vol. 1, issue 2, 2018, pp. 34-50 issn: 2560-6018 eissn: 2620-0104 doi: https://doi.org/10.31181/dmame1802034v * corresponding author. e-mail addresses: veskos@sf.bg.ac.rs (s. vesković), zeljkostevic88@yahoo.com (ž. stević), gordan@uns.ac.rs (g. stojić), drmarkovasiljevic@gmail.com (m. vasiljević), s.milinkovic@sf.bg.ac.rs (s. milinković)

evaluation of the railway management model by using a new integrated model delphi-swara-mabac

slavko vesković1, željko stević2*, gordan stojić3, marko vasiljević2, sanjin milinković1 1 university of belgrade, faculty of transport and traffic engineering, serbia 2 university of east sarajevo, faculty of transport and traffic engineering doboj, bosnia and herzegovina 3 university of novi sad, faculty of technical science, serbia received: 13 april 2018; accepted: 26 august 2018; available online: 26 august 2018. original scientific paper

abstract: the functioning of each traffic system depends to a great extent on the way the rail transport system operates. taking into account the turbulence of the market and the dependence on adequate delivery in freight transport, and on operation in accordance with a yearly timetable in passenger traffic, transport policies change over time. therefore, this paper considers the railway management models on the territory of bosnia and herzegovina. for the purpose of evaluating these models, a new hybrid model has been applied, i.e. a model which combines the delphi, swara (step-wise weight assessment ratio analysis) and mabac (multi-attributive border approximation area comparison) methods. in the first phase of the study, the criteria ranking was determined on the basis of 16 expert grades used in the delphi method. after that, a total of 14 decision-makers determined the mutual criteria impact, which is a prerequisite for the application of the swara method used to determine the relative weight values of the criteria. the third phase involves the application of the mabac method for evaluating and determining the most suitable variant. in addition, a sensitivity analysis involving the application of the aras, waspas, saw and edas methods has been performed, thus verifying the previously obtained variant ranking. key words: railways, transport policy, delphi, swara, mabac.

1 introduction

although the railway has significant advantages, which are reflected in a high level of safety, considerably lower energy consumption per unit of transport, minimal impact on the environment, and the lowest external transport costs compared to other modes of transport, its participation in the transport market has decreased significantly in the second half of the 20th century.
to a large extent, it has been caused by historical, traditional and national influences on railway companies, and above all: a high level of government intervention in the business operations of national railway companies railway companies, through state control and intervention were used to meet political and social goals rather than to function in accordance with market principles, and, costs subsidizing and lack of incentives for change – a high proportion of passenger transport, which was unprofitable and politically supported, placed railway companies in the public service area, and they often transported passengers without an adequate compensation. in europe, all national railway administrations used to be state owned organizations which, for the sake of economic and social policy, were obliged to execute public passenger transport services. due to lower prices, the revenues did not cover actual costs, resulting in their inability to finance exploitation and infrastructure development. the lack of financial resources further led to economic weakening of the railway companies and their position on the market. national railway companies are integrated, i.e. they perform both functions of the infrastructure manager and operator. the regulatory framework is national with no competition in the form of foreign railways while there is no domestic market. due to non-profitability of the railway companies, there was a debt accumulation process in most european countries, especially in the late 1980s. the loss of railway competitiveness in the transport market in intermodal competition, a growing deficit and an increasing debt burden of the state-owned companies have triggered off reforms. in the eu member states and beyond, views and directives concerning the restructuring of the rail system have been adopted. prior reforms did not allow complete railway's liberalization and meeting the requirements of transport market, the expected positive operation of the railway system, the necessary level of rail services quality, satisfaction of the interests of the social community at the national, regional and local level. positive business results were partly achieved on the main railways (pan-european corridors), primarily in transit traffic. although the quality of services on railway system has improved, it is still far from the level required by transport market. defining the method of national railway companies restructuring, and thus the way of infrastructure management in europe, was mainly based on experts opinions, and it depended on the defined traffic policy, the country's level of development, and the readiness to accept changes (political, social and others). determination of the reforming method, or the most acceptable model of restructuring, is based on experiences, intuitions and subjective attitudes of individual institutions and experts. however, the countries have undertaken reforms aimed at easing the debt burden on national rail companies, reducing demands for high subsidies, mitigating and halting the fall of railways in market share comparing to other modes of transport. there was a need to create an efficient integrated railway system in the eu and to vesković et al./decis. mak. appl. manag. eng. 1 (2) (2018) 34-50 36 facilitate border crossing of goods within a single european market with the ultimate aim to: establish a railway transport market, develop competition in the railway sector, and, reduce state subsidies in the railway sector. 
the first task of railway restructuring is to transform the state organization into a business organization capable of carrying out transport operations both on the national and international transport market. in this process, the state has a role to create appropriate conditions for the development of a transport system that functions with the maximum application of market mechanisms and meets the transport needs of the society. in order to establish a harmonized market environment in which transporters functioning in different types of transport are affirmed on the basis of equal conditions of competition, it is necessary to calculate the total transport costs generated. the total costs of transport company include not only direct transport costs, infrastructure costs, traffic management and accident compensation, but also compensation for damage to the environment (cer, 2005). the actual situation is that in such conditions the railway has significant advantages over other modes of transport. in order to fully evaluate these facts, it is necessary to reform traditional railway companies and establish optimal models for their organization and functioning. this paper examines four different models of organization and structure of the railways of the republic of srpska (žrs), which are defined on the base of existing solutions for the reform of national rail companies in europe (predominantly in the european union member states). 2 literature review many studies in the domain of railway transport rely on the application of multicriteria decision-making methods. in (krmac & djordjević, 2017) the group analytical hierarchical process (ahp) was used to determine the key performance indicators for assessing intelligent transport systems. an integrated model consisting of the delphi, group analytical hierarchical process and promethee methods in (nassereddine & eskandari 2017) was applied in the field of public passenger transport, where, as a result, the metro is the most important passenger transport system. also, the integrated mcdm model (dematel, anp and vikor) was used to choose the transport mode in hualien (kuo & chen, 2015). aydin, (2017) commenced a three-year research in istanbul for measuring performances of the railway transit lines. for this purpose he used the topsis method. the performance evaluation of the railway zones in india (ranjan et al. 2016)) was conducted by combining the dematel and vikor methods, while in their research sang et al. (2015) used the fuzzy ahp method for selection and evaluation of railway freight third-partylogistics. leonardi (2016) applied a combination of fuzzy logics with multiplecriteria decision-making (ahp method) to plan a railway infrastructure, while in (santarremigia et al. 2018) the ahp was also applied in the safety area during the railway transport of dangerous materials. a combination of the bwm and saw methods was used in (stević et al. 2017a) to determine the importance of criteria in purchasing wagons in a logistics company. according to hashemkhani zolfani & bahrami (2014), the swara method is suitable for decision-making at a high level of decision-making and also instead of policy-making. its convenience in a decision-making process is reflected in the advantages it has in comparison to other methods for obtaining the weight values of evaluation of the railway management model by using a new integrated model delphiswara-mabac 37 criteria. 
these advantages are primarily seen in a significantly smaller number of comparisons in relation to other criteria, and the possibility to evaluate the opinions of experts on the significance of criteria in a process of determining their weights. over the few past years since this method came into existence, it has been used in a number of publications to determine weight values of the criteria. the swara was used to assess the relation between the floods and influencing parameters in (hong et al. 2017), while the anfis model is applied to flood spatial modeling and zonation, and it is used for the r&d project evaluation in (hashemkhani zolfani et al. 2015). using the swara method in (heidary dahooie et al. 2018), it is concluded that subject competency is the main criteria in it personnel selection. in (keshavarz ghorabaee et al. 2018), it is used to determine the significance of criteria in a process of evaluating construction equipment in sustainable conditions, while ruzgys et al. (2014) apply it to the evaluation of external wall insulation in residential buildings. it is successfully applied to risk assessment (valipour et al. 2017), for selection of a basic shape of the single-family residential house's plan (juodagalvienė et al. 2017), while karabašević et al. (2017) used the adapted swara with the delphi method for selection of personnel. the combination of the swara and waspas is used for solar power plant site selection in (vafaeipour et al. 2014), as well as in (ghorshi nezhad et al. 2015) where the combination of these two methods is applied in the nanotechnology industry. this combination is also integrated in (urošević et al. 2017) where it is used for the selection of personnel in tourism. the integration of the swara, fuzzy kano model and rov methods is proposed in (jain & singh, 2017) to solve supplier selection. the fuzzy swara is used to determine the significance of criteria, and the fuzzy copras for ranking and selecting sustainable 3prlps in the presence risk factors. the suggested model was applied to a case study from automotive industry (zarbakhshnia et al. 2018). a combination of the fuzzy swara and the fuzzy moora is used for sustainable third-party reverse logistic provider selection in plastic industry (mavi et al. 2017). the authors in (panahi et al. 2017) use the swara method for prospecting copper in the anarak region, central iran, while the authors in (ighravwe & oke, 2017) use it for sustenance of zero-loss on production lines from a cement plant. 3 methods 3.1 delphi method the delphi method does the study of and gives projections of uncertain or possible future situations for which we are unable to perform objective statistical legalities, in order to form a model, or apply a formal method. these phenomena are very difficult to quantify because they are mainly qualitative in their nature, i.e. not enough statistical data about them exist that could be used as the basis for our studies. the delphi method is one of the basic forecasting methods, the most famous and most widely used expert judgment method. methods of expert's assessments are representing significant improvement of the classical ways of obtaining the forecast by joint consultation of an expert's group for a given studied phenomenon. in other words, this is a methodologically organized use of the expert's knowledge to predict future states and phenomena. a typical group in one delphi session ranges from a few to thirty experts. 
each interviewed expert, a participant in the method, relies on his or her own knowledge, experience and opinion. the goal of the delphi method is to exploit the collective, group thinking of experts about a certain field and to reach a consensus on an event through group reasoning. this is a method of indirect collective testing with a feedback link. it consists of eight steps:
1: selection of the prognostic task and definition of the basic questions and fields for it;
2: selection of experts;
3: preparation of questionnaires;
4: delivery of questionnaires to the experts;
5: collection of the responses and their evaluation;
6: analysis and interpretation of the responses;
7: re-examination;
8: interpretation of the responses and setting up of the final forecast.
the advantages of the delphi method:
• it covers a large number of respondents;
• experts' statements are objective because they do not know the statements of the others until the end of the round;
• it is possible to examine the opinion and attitude of an individual with respect to a task;
• the method strengthens the sense of community and encourages thinking about the future of the organization.
the disadvantages of the delphi method:
• the success of the method depends exclusively on the participants in the expert panel;
• the implementation process is complicated;
• the number of participants needed in the expert panel cannot be determined exactly;
• the research takes a long time.
according to the rules of the delphi method, the forecasts submitted in the first round are statistically processed and sent back to the experts so that they can make corrections if they find the other opinions convincing. characteristically, most experts keep their first-round forecast.
3.2 swara method
the swara (step-wise weight assessment ratio analysis) method is one of the methods for determining the weight values of criteria, which play an important role in a decision-making process. the method was developed by keršuliene et al. (2010) and, in the authors' opinion, its basic characteristic is the possibility of assessing experts' opinions on the significance of the criteria in the process of determining their weights. after defining and forming the list of criteria involved in a decision-making process, the swara method consists of the following steps:
step 1: sort the criteria according to their significance. in this step, the experts rank the defined criteria by the significance they have: the most significant criterion is placed first, the least significant last, and the criteria in between are ranked accordingly.
step 2: determine the comparative importance of the average value, s_j. starting from the second-ranked criterion, it is necessary to determine how much more important criterion c_j is than criterion c_{j+1}.
step 3: calculate the coefficient k_j as follows:
$$k_j = \begin{cases} 1, & j = 1 \\ s_j + 1, & j > 1 \end{cases} \qquad (1)$$
step 4: determine the recalculated weight q_j as follows:
$$q_j = \begin{cases} 1, & j = 1 \\ \dfrac{q_{j-1}}{k_j}, & j > 1 \end{cases} \qquad (2)$$
step 5: calculate the weight values of the criteria so that their sum equals one:
$$w_j = \frac{q_j}{\sum_{k=1}^{m} q_k} \qquad (3)$$
where w_j represents the relative weight value of criterion j.
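a minimal python sketch of the recurrence in eqs. (1)-(3) may make the procedure easier to follow; the comparative importances s_j used in the usage example are illustrative values only, not those elicited later in the case study.

```python
def swara_weights(s):
    """compute swara weights from comparative importances s_j.

    s: list of s_j values for criteria already sorted by significance;
       the first (most significant) criterion has no comparison, so s[0]
       is ignored and treated as 0.
    returns: list of weights w_j summing to 1 (eqs. (1)-(3)).
    """
    k, q = [], []
    for j, sj in enumerate(s):
        if j == 0:
            k.append(1.0)            # eq. (1), j = 1
            q.append(1.0)            # eq. (2), j = 1
        else:
            k.append(sj + 1.0)       # eq. (1), j > 1
            q.append(q[-1] / k[-1])  # eq. (2), j > 1
    total = sum(q)
    return [qj / total for qj in q]  # eq. (3)

# illustrative comparative importances for four criteria (most to least significant)
weights = swara_weights([0.0, 0.15, 0.20, 0.10])
print([round(w, 3) for w in weights])
```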
3.3 mabac method
the mabac (multi-attributive border approximation area comparison) method is one of the more recent methods. it was developed by dragan pamučar at the research centre for defence logistics in belgrade and was first presented to the scientific public in 2015 (pamučar & ćirović, 2015). to date, it has found very wide application, and its modifications have been used to solve numerous problems in the field of multi-criteria decision-making. the basic idea of the mabac method is to define the distance of the criterion function of each observed alternative from the border approximation area. the procedure for implementing the mabac method consists of the following 6 steps:
step 1: forming the initial decision matrix (x). in the first step, m alternatives are evaluated with respect to n criteria. the alternatives are represented by the vectors $a_i = (x_{i1}, x_{i2}, \ldots, x_{in})$, where $x_{ij}$ is the value of the i-th alternative with respect to the j-th criterion ($i = 1, 2, \ldots, m$; $j = 1, 2, \ldots, n$):
$$x = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & \cdots & x_{mn} \end{bmatrix} \qquad (4)$$
step 2: normalization of the elements of the initial matrix (x):
$$n = \begin{bmatrix} t_{11} & t_{12} & \cdots & t_{1n} \\ t_{21} & t_{22} & \cdots & t_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ t_{m1} & t_{m2} & \cdots & t_{mn} \end{bmatrix} \qquad (5)$$
the elements of the normalized matrix (n) are determined using the following expressions. for criteria of the "benefit" type (a greater value of the criterion is more desirable):
$$t_{ij} = \frac{x_{ij} - x_j^-}{x_j^+ - x_j^-} \qquad (6)$$
for criteria of the "cost" type (a lower value of the criterion is more desirable):
$$t_{ij} = \frac{x_{ij} - x_j^+}{x_j^- - x_j^+} \qquad (7)$$
where $x_{ij}$, $x_j^+$ and $x_j^-$ are elements of the initial decision matrix (x), with $x_j^+ = \max(x_{1j}, \ldots, x_{mj})$ representing the maximum value of the observed criterion over the alternatives and $x_j^- = \min(x_{1j}, \ldots, x_{mj})$ its minimum value.
step 3: calculation of the elements of the weighted matrix (v). the elements of the weighted matrix (v) are calculated on the basis of expression (8):
$$v_{ij} = w_j \, t_{ij} + w_j \qquad (8)$$
where $t_{ij}$ are the elements of the normalized matrix (n) and $w_j$ are the weighting coefficients of the criteria. applying expression (8) yields the weighted matrix
$$v = \begin{bmatrix} w_1(t_{11}+1) & w_2(t_{12}+1) & \cdots & w_n(t_{1n}+1) \\ w_1(t_{21}+1) & w_2(t_{22}+1) & \cdots & w_n(t_{2n}+1) \\ \vdots & \vdots & \ddots & \vdots \\ w_1(t_{m1}+1) & w_2(t_{m2}+1) & \cdots & w_n(t_{mn}+1) \end{bmatrix}$$
where n represents the total number of criteria and m the total number of alternatives.
step 4: determining the border approximation area matrix (g). the border approximation area (baa) for each criterion is determined by expression (9):
$$g_j = \left( \prod_{i=1}^{m} v_{ij} \right)^{1/m} \qquad (9)$$
where $v_{ij}$ are the elements of the weighted matrix (v) and m is the total number of alternatives. after calculating the values $g_j$, the matrix of border approximation areas g (10) is formed in the format $1 \times n$ (n being the total number of criteria by which the offered alternatives are evaluated):
$$g = \begin{bmatrix} g_1 & g_2 & \cdots & g_n \end{bmatrix} \qquad (10)$$
step 5: calculation of the elements of the matrix of distances of the alternatives from the border approximation area (q):
$$q = \begin{bmatrix} q_{11} & q_{12} & \cdots & q_{1n} \\ q_{21} & q_{22} & \cdots & q_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ q_{m1} & q_{m2} & \cdots & q_{mn} \end{bmatrix} \qquad (11)$$
the distance of the alternatives from the border approximation area, $q_{ij}$, is determined as the difference between the elements of the weighted matrix (v) and the values of the border approximation areas (g):
$$q = v - g = \begin{bmatrix} v_{11} & \cdots & v_{1n} \\ \vdots & \ddots & \vdots \\ v_{m1} & \cdots & v_{mn} \end{bmatrix} - \begin{bmatrix} g_1 & \cdots & g_n \end{bmatrix} \qquad (12)$$
$$q = \begin{bmatrix} v_{11} - g_1 & \cdots & v_{1n} - g_n \\ \vdots & \ddots & \vdots \\ v_{m1} - g_1 & \cdots & v_{mn} - g_n \end{bmatrix} = \begin{bmatrix} q_{11} & \cdots & q_{1n} \\ \vdots & \ddots & \vdots \\ q_{m1} & \cdots & q_{mn} \end{bmatrix} \qquad (13)$$
where $g_j$ represents the border approximation area for criterion $c_j$, $v_{ij}$ the elements of the weighted matrix (v), n the number of criteria and m the number of alternatives. alternative $a_i$ may belong to the border approximation area (g), to the upper approximation area ($g^+$) or to the lower approximation area ($g^-$), i.e. $a_i \in \{g, g^+, g^-\}$. the upper approximation area ($g^+$) is the area in which the ideal alternative ($a^+$) is located, while the lower approximation area ($g^-$) is the area in which the anti-ideal alternative ($a^-$) is located (fig. 1).
fig. 1 display of the upper, lower and border approximation areas (pamučar & ćirović, 2015)
the affiliation of alternative $a_i$ to an approximation area (g, $g^+$ or $g^-$) is determined on the basis of expression (14):
$$a_i \in \begin{cases} g^+, & \text{if } q_{ij} > 0 \\ g, & \text{if } q_{ij} = 0 \\ g^-, & \text{if } q_{ij} < 0 \end{cases} \qquad (14)$$
in order for an alternative to be selected as the best from a given set, it should belong to the upper approximation area ($g^+$) by as many criteria as possible. if, for example, alternative $a_i$ belongs to the upper approximation area by 5 criteria (out of 6 in total) and to the lower approximation area ($g^-$) by one criterion, this means that by 5 criteria the alternative is close or equal to the ideal alternative, while by one criterion it is close or equal to the anti-ideal alternative. if $q_{ij} > 0$, i.e. $q_{ij} \in g^+$, alternative $a_i$ is close or equal to the ideal alternative; if $q_{ij} < 0$, i.e. $q_{ij} \in g^-$, alternative $a_i$ is close or equal to the anti-ideal alternative.
step 6: ranking of the alternatives. the value of the criterion function for each alternative (15) is obtained as the sum of the distances of the alternative from the border approximation areas ($q_{ij}$). summing the elements of matrix q by rows gives the final values of the criterion functions of the alternatives:
$$s_i = \sum_{j=1}^{n} q_{ij}, \qquad i = 1, 2, \ldots, m \qquad (15)$$
where n represents the number of criteria and m the number of alternatives.
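the following python sketch is a minimal illustration of steps 1-6 as given by eqs. (4)-(15); it is not the authors' implementation, and the small decision matrix and weights in the usage example are made up for demonstration only.

```python
import numpy as np

def mabac(x, w, benefit=None):
    """rank alternatives with the mabac method (eqs. (4)-(15)).

    x: (m, n) decision matrix, one row per alternative, one column per criterion
    w: (n,) criteria weights summing to 1
    benefit: boolean mask, True for benefit criteria (default: all benefit)
    returns: (s, q), where s[i] is the criterion-function value of alternative i
             and q is the matrix of distances from the border approximation area
    """
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    m, n = x.shape
    if benefit is None:
        benefit = np.ones(n, dtype=bool)

    lo, hi = x.min(axis=0), x.max(axis=0)
    t = np.where(benefit, (x - lo) / (hi - lo),    # eq. (6), benefit criteria
                           (x - hi) / (lo - hi))   # eq. (7), cost criteria

    v = w * (t + 1.0)                  # eq. (8), weighted matrix
    g = v.prod(axis=0) ** (1.0 / m)    # eq. (9), border approximation area
    q = v - g                          # eqs. (12)-(13), distances from the baa
    s = q.sum(axis=1)                  # eq. (15), criterion functions
    return s, q

# illustrative use with a made-up 3 x 2 benefit-type matrix and weights (0.7, 0.3)
s, _ = mabac([[3.0, 5.0], [4.0, 4.0], [2.0, 6.0]], [0.7, 0.3])
print(np.argsort(-s) + 1)  # alternatives ordered from best to worst
```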
4 case study
four variants of the management model for railway companies were considered:
1) variant 1 - model of a single (independent) legal entity with a simple organizational structure and a high degree of centralization.
fig. 2 variant 1 - model of a single (independent) legal entity
2) variant 2 - clear holding: a company exclusively dealing with management activities, i.e. the establishment, financing and management of companies.
this type of holding does not have any other special activities. a clear holding does not deal with production or sale, nor does it perform any other business functions, even those that are common to the daughter companies or members of the holding.
fig. 3 variant 2 - clear holding
3) variant 3 - mixed holding: in addition to management tasks, a mixed holding also performs other types of activities in the field of production, trade, research, finance or services. within the mixed-activity holding company there is a parent company (infrastructure) and companies engaged in the transport and traction of trains.
fig. 4 variant 3 - mixed holding
4) variant 4 - mixed holding - model of three independent companies: infrastructure, transport of passengers and transport of goods.
the criteria for selecting the most favorable model of restructuring and organization of railway companies are:
k1 - the model's efficiency;
k2 - the attractiveness of the model for attracting operators;
k3 - satisfying the needs of the transport market;
k4 - compliance with eu directives;
k5 - financial independence of the model;
k6 - possibility of realizing the model.
k1 - efficiency is the ability to achieve results and business goals. this means that the offered model should enable efficient exploitation and maintenance. this criterion refers to management and functionality as well as to the ability to use all the resources of the model in order to achieve the necessary effectiveness. the criterion should be maximized.
k2 - "the attractiveness of the model for attracting operators" implies the ability of the model to provide open access to the infrastructure for operators, i.e. the use of the railway infrastructure by operators under equal conditions and without discrimination. in this way, preconditions for multiple operators are created. the criterion should be maximized.
k3 - this criterion refers to the possibility of the offered model to satisfy the needs of operators in the transport market with respect to the state and capacity of the railway infrastructure (permitted speed, throughput, electrification, permissible axle load, etc.). regardless of the operator's capability (transport time, prices, frequency, reliability, etc.), the state of the infrastructure significantly influences the demands of customers on the market (population and economy). the criterion should be maximized.
k4 - certain models can be fully or partially harmonized with the eu directives aimed at creating a single transport market, liberalizing it and ensuring the independence of the management of railway undertakings. the criterion should be maximized.
k5 - the infrastructure manager should be a functionally sound and financially stable company. the state allocates financial resources to infrastructure managers only for the development of railway infrastructure, not for workers' salaries. the k5 criterion assesses the extent to which the model can satisfy these requirements. the criterion should be maximized.
k6 - this criterion refers to the possibility of realizing the observed model from the aspect of legislation, environment, support of political, social and other participants, etc. the criterion should be maximized.
in the first phase of the study, the ranking of the criteria was determined on the basis of the grades of 16 experts using the delphi method.
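the paper does not spell out how the 16 first-round rankings were merged into a single criteria order; one simple, commonly used possibility, shown below purely as an assumption and not as the authors' procedure, is to order the criteria by their mean rank across experts.

```python
from statistics import mean

# hypothetical first-round rankings from a few experts (1 = most significant);
# in the study there were 16 such rankings, one per expert
rankings = {
    "k1": [2, 1, 2], "k2": [4, 4, 5], "k3": [1, 2, 1],
    "k4": [5, 5, 4], "k5": [3, 3, 3], "k6": [6, 6, 6],
}

# order the criteria by mean rank; ties would call for a further delphi round
order = sorted(rankings, key=lambda c: mean(rankings[c]))
print(order)  # ['k3', 'k1', 'k5', 'k2', 'k4', 'k6'] for these made-up grades
```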
after that, a total of 14 decision-makers determined the mutual impact of the criteria, which is a prerequisite for the application of the swara method used to determine the relative weight values of the criteria. after applying eqs. (1)-(3), the weight values of the criteria shown in table 1 were obtained.

table 1 calculation procedure and results of the weight values of the criteria obtained using the swara method
criterion   sj      kj = sj + 1   qj      wj
k3          1.000   1.000         1.000   0.224
k1          0.100   1.100         0.909   0.203
k5          0.148   1.148         0.792   0.177
k2          0.179   1.179         0.672   0.150
k4          0.168   1.168         0.575   0.129
k6          0.102   1.102         0.522   0.117
sum                               4.471   1.000

the first column of table 1 shows the criteria ranking that was previously determined using the delphi method, while the second column (sj) represents the comparative significance of each criterion in relation to the previous, better-ranked one, as the average value of the decision-makers' responses. based on the results obtained using the swara method, the most important criterion is k3 (satisfying the needs of the transport market), followed with a slightly lower value by k1 (the model's efficiency). the general conclusion regarding the values of the criteria considered in this study is that all the criteria have a sufficient influence on the decision-making. in future research related to determining the significance of the criteria, it is recommended to use the rough swara method developed in (zavadskas et al. 2018). after obtaining the relative criteria values, it is necessary to determine the most favorable variant of railway management in bosnia and herzegovina. for this purpose, the mabac method is used. all 14 decision-makers who had previously determined the mutual impact of the criteria also carried out the evaluation of the alternatives. by applying the geometric mean of all the answers, the initial decision matrix shown in table 2 is obtained.

table 2 the initial decision matrix based on the responses of the 14 decision-makers
      c1      c2      c3      c4      c5      c6
a1    4.238   3.918   4.530   3.710   4.502   4.810
a2    5.142   4.786   4.698   5.433   5.174   6.706
a3    6.470   4.909   5.463   6.069   6.020   6.392
a4    4.341   7.471   4.900   7.796   5.051   3.580

after forming the initial decision matrix, eqs. (6) and (7) are applied for normalization. since in this study all the criteria are of the benefit type, equation (6) is used, and the normalized matrix shown in table 3 is obtained.

table 3 normalized matrix
      c1      c2      c3      c4      c5      c6
a1    0.000   0.000   0.000   0.000   0.000   0.393
a2    0.405   0.244   0.180   0.422   0.442   1.000
a3    1.000   0.279   1.000   0.577   1.000   0.899
a4    0.046   1.000   0.396   1.000   0.361   0.000

table 4 shows the weighted normalized matrix obtained by multiplying the normalized matrix from table 3 by the weight values of the criteria obtained using the swara method; equation (8) is used to weight the normalized matrix. in addition, the last row of table 4 contains the values of the border approximation area obtained by applying equation (9).

table 4 weighted normalized matrix (v)
      c1      c2      c3      c4      c5      c6
a1    0.224   0.203   0.177   0.150   0.129   0.163
a2    0.314   0.253   0.209   0.214   0.186   0.234
a3    0.447   0.260   0.354   0.237   0.257   0.222
a4    0.234   0.407   0.247   0.301   0.175   0.117
g     0.293   0.272   0.239   0.219   0.181   0.177
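the whole chain from table 2 to the final ranking can be cross-checked in a few lines; under the assumption that the matrices above are transcribed exactly as published, the figures printed below agree, up to rounding, with the distances and ranks reported in table 5 in the next paragraph.

```python
import numpy as np

w = np.array([0.224, 0.203, 0.177, 0.150, 0.129, 0.117])   # table 1 (swara weights)
x = np.array([[4.238, 3.918, 4.530, 3.710, 4.502, 4.810],  # table 2
              [5.142, 4.786, 4.698, 5.433, 5.174, 6.706],
              [6.470, 4.909, 5.463, 6.069, 6.020, 6.392],
              [4.341, 7.471, 4.900, 7.796, 5.051, 3.580]])

t = (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))  # eq. (6), all benefit-type
v = w * (t + 1.0)                                          # eq. (8), table 4
g = v.prod(axis=0) ** (1.0 / len(x))                       # eq. (9), last row of table 4
s = (v - g).sum(axis=1)                                    # eqs. (12)-(15)
print(np.round(s, 3))       # approximately [-0.334, 0.029, 0.398, 0.100]
print(np.argsort(-s) + 1)   # ranking from best to worst: a3, a4, a2, a1
```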
table 5 shows the matrix of distances of the alternatives from the border approximation area (q), obtained by applying eqs. (12) and (13), and the ranking of the model variants obtained using equation (15).

table 5 the matrix of distances of the alternatives from the border approximation area (q) and the ranking of the alternatives
q = v - g   c1      c2      c3      c4      c5      c6      si      rank
a1          -0.069  -0.068  -0.062  -0.068  -0.052  -0.014  -0.334  4
a2           0.021  -0.019  -0.030  -0.005   0.004   0.056   0.029  3
a3           0.154  -0.012   0.116   0.018   0.076   0.045   0.398  1
a4          -0.059   0.135   0.009   0.082  -0.006  -0.060   0.100  2

after performing the calculation and applying the hybrid model, the best-ranked variant of railway management is variant 3, the mixed holding model, while the worst-ranked option is variant 1, the model of a single (independent) legal entity with a simple organizational structure and a high degree of centralization.
5 sensitivity analysis
in order to determine the stability of the results obtained using the hybrid delphi-swara-mabac model, the calculation of the multi-criteria model was repeated with four more methods: aras (zavadskas & turskis, 2010), waspas (zavadskas et al. 2012), saw (maccrimmon, 1968; stević et al. 2017a) and edas (keshavarz ghorabaee et al., 2015; stević et al. 2016; stević et al. 2017b). the results of the sensitivity analysis are shown in table 6.

table 6 the results of the sensitivity analysis (values with ranks in parentheses)
      mabac          aras           waspas         saw            edas
v1    -0.334 (4)     0.644 (4)      0.381 (4)      0.652 (4)      0.652 (4)
v2     0.029 (3)     0.787 (3)      0.463 (3)      0.793 (3)      0.793 (3)
v3     0.398 (1)     0.884 (1)      0.521 (1)      0.891 (1)      0.891 (1)
v4     0.100 (2)     0.836 (2)      0.486 (2)      0.833 (2)      0.833 (2)

based on the results of the sensitivity analysis, the stability of the model and of the obtained ranking of the variant solutions is confirmed: with all four methods applied in the sensitivity analysis the ranks do not change, that is, each variant retains its initial rank.
6 conclusion
evaluation of the level of railway market restructuring and reform is an important process that shows the phase a country is in. alignment of these levels is of great importance to neighbouring countries, because in this way a more stable transport market can be established. this is especially important for railways located on strong transit directions and pan-european corridors. the european rail system should not be "patched together" from unsynchronized national rail reform levels, since this does not contribute to the creation of a single european transport market, and thus to the desired open rail market. in addition, such a situation inevitably leads to a reduction in the quality of rail services and an uncompetitive position of the railways in the transport market. the eu directives provide no unique solution in terms of selecting rail management models. the issue this paper deals with is the development of a general model that provides a solution to the institutional management of national rail companies. quantified relevant criteria have been identified for the choice of the management model. the synchronization of railway reforms has been promoted through various institutions, and the implementation of reforms and liberalization has often been carried out on the basis of experts' opinions or the application of inadequate methods. this paper presents a new way of determining an adequate restructuring model for national railway companies, which implies the integration of the delphi, swara and mabac methods.
the three-phase hybrid model takes into account all the relevant facts and aspects that need to be considered in such research, and the integration of the above-mentioned methods is also one of the contributions of the work. in order to determine the stability of the model, a sensitivity analysis was performed in which four other methods of multicriteria analysis were applied, the results of which have confirmed the obtained results using the hybrid model proposed in this document. acknowledgements this paper is supported by ministry of science and technological development of the republic of serbia (project no. 36012). references aydin, n. (2017). a fuzzy-based multi-dimensional and multi-period service quality evaluation outline for rail transit systems. transport policy, 55, 87-98. cer (2005.), reforma železnice u evropi, brussels, belgium ghorshi nezhad, m. r., hashemkhani zolfani, s., moztarzadeh, f., zavadskas, e. k., & bahrami, m. (2015). planning the priority of high tech industries based on swarawaspas methodology: the case of the nanotechnology industry in iran. economic research-ekonomska istraživanja, 28(1), 1111-1137. hashemkhani zolfani, s., & bahrami, m. (2014). investment prioritizing in high tech industries based on swara-copras approach. technological and economic development of economy, 20(3), 534-553. hashemkhani zolfani, s., salimi, j., maknoon, r., & kildiene, s. (2015). technology foresight about r&d projects selection; application of swara method at the policy making level. engineering economics, 26(5), 571-580. heidary dahooie, j., beheshti jazan abadi, e., vanaki, a. s., & firoozfar, h. r. (2018). competency‐based it personnel selection using a hybrid swara and aras‐g methodology. human factors and ergonomics in manufacturing & service industries. 28(1), 5-16. hong, h., panahi, m., shirzadi, a., ma, t., liu, j., zhu, a. x., ... & kazakis, n. (2017). flood susceptibility assessment in hengfeng area coupling adaptive neuro-fuzzy inference system with genetic algorithm and differential evolution. science of the total environment. 621, 1124-1141 vesković et al./decis. mak. appl. manag. eng. 1 (2) (2018) 34-50 48 ighravwe, d. e., & oke, s. a. (2017). sustenance of zero-loss on production lines using kobetsu kaizen of tpm with hybrid models. total quality management & business excellence, 1-25. jain, n., & singh, a. r. (2017). fuzzy kano integrated mcdm approach for supplier selection based on must be criteria. international journal of supply chain management, 6(2), 49-59. juodagalvienė, b., turskis, z., šaparauskas, j., & endriukaitytė, a. (2017). integrated multi-criteria evaluation of house’s plan shape based on the edas and swara methods. engineering structures and technologies, 9(3), 117-125. karabasevic, d., stanujkic, d., urosevic, s., popovic, g., & maksimovic, m. (2017). an approach to criteria weights determination by integrating the delphi and the adapted swara methods. management: journal of sustainable business and management solutions in emerging economies, 22(3), 15-25. keršuliene, v., zavadskas, e. k., & turskis, z. (2010). selection of rational dispute resolution method by applying new step-wise weight assessment ratio analysis (swara). journal of business economics and management, 11(2), 243-258. keshavarz ghorabaee, m., amiri, m., zavadskas, e. k., & antucheviciene, j. (2018). a new hybrid fuzzy mcdm approach for evaluation of construction equipment with sustainability considerations. archives of civil and mechanical engineering, 18(1), 32-49. 
keshavarz ghorabaee, m., zavadskas, e. k., olfat, l., & turskis, z. (2015). multicriteria inventory classification using a new method of evaluation based on distance from average solution (edas). informatica, 26(3), 435-451. krmac, e., & djordjević, b. (2017). an evaluation of indicators of railway intelligent transportation systems using the group analytic hierarchy process. electronics science technology and application, 4(2). kuo, s. y., & chen, s. c. (2015). transportation policy making using mcdm model: the case of hualien. 44(1), 25-44. leonardi, g. (2016). a fuzzy model for a railway-planning problem. applied mathematical sciences, 10(27), 1333-1342. maccrimmon, k. r. (1968). decisionmaking among multiple-attribute alternatives: a survey and consolidated approach (no. rm-4823-arpa). rand corp santa monica ca. mavi, r. k., goh, m., & zarbakhshnia, n. (2017). sustainable third-party reverse logistic provider selection with fuzzy swara and fuzzy moora in plastic industry. the international journal of advanced manufacturing technology, 91(5-8), 24012418. nassereddine, m., & eskandari, h. (2017). an integrated mcdm approach to evaluate public transportation systems in tehran. transportation research part a: policy and practice, 106, 427-439. pamučar, d., & ćirović, g. (2015). the selection of transport and handling resources in logistics centers using multi-attributive border approximation area comparison (mabac). expert systems with applications, 42(6), 3016-3028. evaluation of the railway management model by using a new integrated model delphiswara-mabac 49 panahi, s., khakzad, a., & afzal, p. (2017). application of stepwise weight assessment ratio analysis (swara) for copper prospectivity mapping in the anarak region, central iran. arabian journal of geosciences, 10(22), 484. ranjan, r., chatterjee, p., & chakraborty, s. (2016). performance evaluation of indian railway zones using dematel and vikor methods. benchmarking: an international journal, 23(1), 78-95. ruzgys, a., volvačiovas, r., ignatavičius, č., & turskis, z. (2014). integrated evaluation of external wall insulation in residential buildings using swara-todim mcdm method. journal of civil engineering and management, 20(1), 103-110. sang, j. j., wang, x. f., sun, h. s., & li, m. l. (2015). selection and evaluation of railway freight third-party-logistics based on f-ahp method. in applied mechanics and materials (vol. 744, pp. 1878-1882). trans tech publications. santarremigia, f. e., molero, g. d., poveda-reyes, s., & aguilar-herrando, j. (2018). railway safety by designing the layout of inland terminals with dangerous goods connected with the rail transport system. safety science. stević, ž., pamučar, d., kazimieras zavadskas, e., ćirović, g., & prentkovskis, o. (2017a). the selection of wagons for the internal transport of a logistics company: a novel approach based on rough bwm and rough saw methods. symmetry, 9(11), 264. stević, ž., pamučar, d., vasiljević, m., stojić, g., & korica, s. (2017b). novel integrated multi-criteria model for supplier selection: case study construction company. symmetry, 9(11), 279. stević, ž., tanackov, i., vasiljević, m., & vesković, s. (2016, september). evaluation in logistics using combined ahp and edas method. in proceedings of the xliii international symposium on operational research, belgrade, serbia (pp. 20-23). urosevic, s., karabasevic, d., stanujkic, d., & maksimovic, m. (2017). an approach to personnel selection in the tourism industry based on the swara and the waspas methods. 
economic computation & economic cybernetics studies & research, 51(1). 75-88. vafaeipour, m., hashemkhani zolfani, s., varzandeh, m. h. m., derakhti, a., & eshkalag, m. k. (2014). assessment of regions priority for implementation of solar projects in iran: new application of a hybrid multi-criteria decision making approach. energy conversion and management, 86, 653-663. valipour, a., yahaya, n., md noor, n., antuchevičienė, j., & tamošaitienė, j. (2017). hybrid swara-copras method for risk assessment in deep foundation excavation project: an iranian case study. journal of civil engineering and management, 23(4), 524-532. zarbakhshnia, n., soleimani, h., & ghaderi, h. (2018). sustainable third-party reverse logistics provider evaluation and selection using fuzzy swara and developed fuzzy copras in the presence of risk criteria. applied soft computing. vesković et al./decis. mak. appl. manag. eng. 1 (2) (2018) 34-50 50 zavadskas, e. k., & turskis, z. (2010). a new additive ratio assessment (aras) method in multi-criteria decision‐making. technological and economic development of economy, 16(2), 159-172. zavadskas, e. k., turskis, z., antucheviciene, j., & zakarevicius, a. (2012). optimization of weighted aggregated sum product assessment. elektronika ir elektrotechnika, 122(6), 3-6. zavadskas, e. k., stević, ž., tanackov, i., pretkovskis, o., (2018). a novel multi-criteria approach – rough step-wise weight assessment ratio analysis method (r-swara) and its application in logistics, studies in informatics and control, 27, 1. 97-106. © 2018 by the authors. submitted for possible open access publication under the terms and conditions of the creative commons attribution (cc by) license (http://creativecommons.org/licenses/by/4.0/). plane thermoelastic waves in infinite half-space caused decision making: applications in management and engineering vol. 3, issue 2, 2020, pp. 131-148. issn: 2560-6018 eissn: 2620-0104 doi: https://doi.org/10.31181/dmame2003131r * corresponding author. e-mail addresses: markoradovanovicgdb@yahoo.com (m. radovanović), aca.r.0860.ar@gmail.com (a. ranđelović), antras1209@gmail.com (ž. jokić) application of hybrid model fuzzy ahp vikor in selection of the most efficient procedure for rectification of the optical sight of the longrange rifle marko radovanović1*, aca ranđelović1 and željko jokić1 1 university of defence, military academy, belgrade, serbia received: 12 july 2020; accepted: 25 september 2020; available online: 10 october 2020. original scientific paper abstract: the paper presents a decision support model when choosing the most efficient rectification procedure of the optical sight of the long range rifle. the model is based on the fuzzy ahp method and the vikor method. using the fuzzy ahp method, coefficient values of the criteria were defined. fuzzification of the ahp method was performed by combining data obtained from experts comparison of criteria in pairs and the degree of confidence in the comparison. using the vikor method, the best alternative was selected. through the paper, the criteria that condition this choice are elaborated and the application of the method in a specific situation is presented. also, the paper presents the sensitivity analysis of the developed model. key words: fuzzy ahp, vikor, multi-criteria decision-making, rectification, long-range rifle. 1. introduction the serbian army is a complex organizational system, where the decision-making process is a very important element. 
therefore, the application of multi-criteria decision-making methods is an indispensable segment in this process. this paper presents a model for selecting the most efficient rectification method of a 12.7 mm m93 long range rifle optical sight. a long-range rifle is a weapon to support infantry platoons in attack and defense. it is a type of small arms that is specially designed for fire action on people, noncombat and lightly armored combat vehicles, at distances up to 1800 m (randjelovic et al. 2019a). it is a weapon of high accuracy and precision and achieves its firepower on targets by direct shooting. successful rectification of sights achieves the accuracy and precision of a longrange rifle. based on accuracy and precision, the probability of hitting the target is mailto:markoradovanovicgdb@yahoo.com mailto:aca.r.0860.ar@gmail.com mailto:antras1209@gmail.com radovanović et al./decis. mak. appl. manag. eng. 3 (2) (2020) 131-148 132 determined, which affects the efficiency of long-range rifle 12.7 mm m93 solving fire tasks in operations. having in mind the importance of rectification of the optical sight of a long range rifle for performing combat actions, the most efficient rectification procedure was selected by applying the method of multi criteria decision making. 2. problem description through this paper, a model is presented which determines the most efficient and most economical procedure of rectification of the optical sight of a long range rifle. procedures for rectification of the optical sight of the 12.7 mm m93 long-range rifle are defined on the basis of the provisions of the technical and temporary instructions for the optical sight of the long-range rifle and the instructions for use for the optical sight of the long-range rifle (long-range rifle 12.7 mm m93 (description, handling and maintenance), 2010; purpose, description and handling of the 12.7 mm longrange rifle, 1999; the long-rifle optical sight on m93 for the long-range rifle "zastava" 12.7 mm m93, 1998). in addition to the above, as one alternative, a modeled rectification procedure was taken, which was reached on the basis of the results of previous research in this area, presented in detail in radovanović (2016), radovanović et al. (2016) and randjelovic et al. (2019a). the aim of this paper is to select the most efficient rectification procedure using the method of multi-criteria decision-making in order to indirectly increase the efficiency of realization of fire tasks with a long-range rifle. the results used for the analysis were obtained on the basis of realized shootings at the training field "pasuljanske livade". most units of the serbian army for the process of rectification of the optical sight of the long-rifle 12.7 mm m93, use the model shown in the temporary instructions for long-range rifle (purpose, description and handling of long-range rifle 12.7 mm, 1999). to a lesser extent, other methods of rectification are used in the units. according to the above, it can be concluded that there is no universality regarding the rectification of the optical sight of a long-range rifle. comparisons regarding quality, but also other parameters of rectification have not been performed so far. in other words, there are several satisfactory ways of rectification, but so far no detailed analysis has been made as to which way (model) would be the most acceptable from several aspects (quality, price, required resources, etc.). 
accordingly, it is clear that the presented problem is an ideal field for the application of multi-criteria decisionmaking methods. in the literature available to the authors, it was found that there is not a large number of papers dealing with this issue. radovanović (2016) models a new rectification procedure and the software program correction of sights. in the paper radovanović et al. (2016) performed a numerical analysis of different ways of rectification in relation to certain criteria such as ammunition consumption, time and price of rectification. randjelovic et al. (2019a) show the dependence of the rectification procedure on the execution of fire tasks in a counter-terrorist operation. the available literature describes only a part of the criteria on the basis of which the most efficient rectification procedure is selected. 3. description of applied methods the hybrid model, applied when solving the problem of choosing the most efficient rectification method of the long range rifle optical sight, was defined by a application of hybrid model fuzzy ahp vikor in selection of the most efficient procedure ... 133 combination of the fuzzy ahp and vikor methods. this part of the paper describes the methods used in the paper. the fuzzy ahp method was used to define the coefficient values, while the vikor method was used to select the best alternative. figure 1 shows the phases through which this model was realized. figure 1. appearance of the model for rectification of the optical sight of a long-range rifle 3.1. fuzzy ahp method the ahp method was developed by thomas saaty (1980). to date, this method has undergone a large number of modifications (božanić et al., 2013; stević et al., 2017; petrović et al., 2018; chatterjee et al., 2019; afriliansyah et al., 2019; osintsev et al., 2020; zhu et al., 2020;), but in some cases it is still used in its original form (radovanović et al., 2019; radovanović and stevanović, 2020; ranđelović et al., 2019b) both in the individual (badi and abdulshahed, 2019) and in group decision making (srđević and zoranović, 2003). analytical hierarchical process is a method based on the decomposition of a complex problem into a hierarchy, with the goal at the top, criteria, sub-criteria and alternatives at the levels and sublevels of the hierarchy (saaty, 1980). for comparisons in pairs, which is the basis of the ahp method, the saaty’s scale is usually used, table 1. table 1. saaty’s pair-wise comparison scale standard values definition inverse values 1 same meaning 1 3 weak dominance 1/3 5 strong dominance 1/5 7 very strong dominance 1/7 9 absolute dominance 1/9 2, 4, 6, 8 intermediate values 1/2, 1/4, 1/6, 1/8 the comparison in pairs leads to the initial decision matrices. the saaty’s scale is most commonly used to determine the coefficient values of the criteria, but can also be used to rank alternatives. radovanović et al./decis. mak. appl. manag. eng. 3 (2) (2020) 131-148 134 very often when taking values from the saaty’s scale in the pair-wise comparison process, decision makers hesitate between the values they will assign to a particular comparison. in other words, it happens that they are not sure of the comparison they are making. due to the above, various modifications of the saaty’s scale are often made. one of them is the application of fuzzy numbers. there are different approaches in the fuzzification of the saaty scale, and in principle they can be divided into two groups: sharp (hard) and soft fuzzification (božanić et al., 2015b). 
fuzzification can be performed with different types of fuzzy numbers, and is most often done using a triangular fuzzy number (figure 2), whose membership function is
$$\mu_T(x) = \begin{cases} 0, & x < t_1 \\ \dfrac{x - t_1}{t_2 - t_1}, & t_1 \le x \le t_2 \\ \dfrac{t_3 - x}{t_3 - t_2}, & t_2 \le x \le t_3 \\ 0, & x > t_3 \end{cases}$$
figure 2. triangular fuzzy number t (pamučar et al., 2016b)
by "sharp" fuzzification it is meant that a fuzzy number $t = (t_1, t_2, t_3)$ has a predetermined confidence interval, that is, it is predetermined that the value of the fuzzy number will not be greater than $t_3$ or less than $t_1$ (božanić et al., 2015b). based on such a predefined fuzzy saaty's scale, the pair-wise comparison is made. in soft fuzzification, the confidence interval of the values in the saaty's scale is not predetermined, but is defined during the decision-making process, based on additional parameters. the definition of the coefficient values of the criteria in this paper was performed by applying the fuzzified saaty's scale presented in the works of božanić et al. (2016), pamučar et al. (2016a), božanić (2017), božanić et al. (2018), bojanic et al. (2018) and bobar et al. (2020). the starting elements of this fuzzification are (bobar et al., 2020): 1) introducing fuzzy numbers instead of the classic numbers of the saaty scale, and 2) introducing the degree of confidence γ of decision makers/analysts/experts (dm/a/e) in the statements they make when comparing in pairs. the degree of confidence (γ) is defined at the level of each comparison in pairs. the value of the degree of confidence belongs to the interval [0, 1], where γ = 1 describes the absolute confidence of the dm/a/e in the defined comparison. a decrease in the confidence of the dm/a/e in the performed comparison is accompanied by a decrease in the degree of confidence γ_ji. the forms for calculating the fuzzy numbers are given in table 2.

table 2. fuzzification of the saaty's scale using the degree of confidence (bobar et al., 2020)
definition             standard value   fuzzy number                          inverse value of fuzzy number
same meaning           1                (1, 1, 1)                             (1, 1, 1)
weak dominance         3                (3γ_ji, 3, (2 - γ_ji)3)               (1/((2 - γ_ji)3), 1/3, 1/(3γ_ji))
strong dominance       5                (5γ_ji, 5, (2 - γ_ji)5)               (1/((2 - γ_ji)5), 1/5, 1/(5γ_ji))
very strong dominance  7                (7γ_ji, 7, (2 - γ_ji)7)               (1/((2 - γ_ji)7), 1/7, 1/(7γ_ji))
absolute dominance     9                (9γ_ji, 9, (2 - γ_ji)9)               (1/((2 - γ_ji)9), 1/9, 1/(9γ_ji))
intermediate values    2, 4, 6, 8       (xγ_ji, x, (2 - γ_ji)x), x = 2,4,6,8  (1/((2 - γ_ji)x), 1/x, 1/(xγ_ji))

an example of the appearance of a fuzzy number with different degrees of confidence is given in figure 3; the value of weak dominance from the saaty's scale is taken with the degrees of confidence γ = 1, γ = 0.7 and γ = 0.3.
figure 3. dependence of the fuzzy number on the degree of confidence: (a) γ = 1, (b) γ = 0.7 (support 3.5-6.5), (c) γ = 0.3 (support 1.5-8.5)
by introducing different values of the degree of confidence, the left and right distributions of the fuzzy comparisons change according to the expression (bobar et al., 2020):
$$\tilde{t} = (t_1, t_2, t_3) = \begin{cases} t_1 = \gamma\, t_2, & t_2 \in [1/9, 9] \\ t_2, & t_2 \in [1/9, 9] \\ t_3 = (2 - \gamma)\, t_2, & t_2 \in [1/9, 9] \end{cases} \qquad (1)$$
where the value $t_2$ represents the value of the linguistic expression from the classical saaty's scale, which in the fuzzy number has the maximum membership $\mu(t_2) = 1$. the fuzzy number $\tilde{t} = (t_1, t_2, t_3) = (\gamma x, x, (2-\gamma)x)$, $x \in [1, 9]$, is defined by the expressions (božanić, 2017):
$$t_1 = \begin{cases} \gamma x, & \gamma x > 1 \\ 1, & \gamma x \le 1 \end{cases} \qquad (2)$$
$$t_2 = x, \quad x \in [1, 9] \qquad (3)$$
$$t_3 = (2 - \gamma_{ji})\, x, \quad x \in [1, 9] \qquad (4)$$
the inverse fuzzy number $\tilde{t}^{-1} = (1/t_3, 1/t_2, 1/t_1) = \left(\dfrac{1}{(2-\gamma_{ji})x}, \dfrac{1}{x}, \dfrac{1}{\gamma_{ji} x}\right)$, $x \in [1, 9]$, is defined as (božanić, 2017):
$$1/t_3 = \begin{cases} \dfrac{1}{(2-\gamma_{ji})x}, & (2-\gamma_{ji})x > 1 \\ 1, & (2-\gamma_{ji})x \le 1 \end{cases}, \quad x \in [1, 9] \qquad (5)$$
$$1/t_2 = 1/x, \quad x \in [1, 9] \qquad (6)$$
$$1/t_1 = \dfrac{1}{\gamma_{ji}\, x}, \quad x \in [1, 9] \qquad (7)$$
accordingly, the initial decision matrix has the following form (božanić et al., 2015a):
$$a = \begin{bmatrix} (a_{11};\gamma_{11}) & (a_{12};\gamma_{12}) & \cdots & (a_{1n};\gamma_{1n}) \\ (a_{21};\gamma_{21}) & (a_{22};\gamma_{22}) & \cdots & (a_{2n};\gamma_{2n}) \\ \vdots & \vdots & \ddots & \vdots \\ (a_{n1};\gamma_{n1}) & (a_{n2};\gamma_{n2}) & \cdots & (a_{nn};\gamma_{nn}) \end{bmatrix} \qquad (8)$$
where $\gamma_{ji} = \gamma_{ij}$. reaching the final results implies further application of the standard steps of the ahp method. at the end of the application, the fuzzy number is converted into a real number. numerous methods are used for this procedure (herrera and martinez, 2000). some of the known expressions for defuzzification are (liou and wang, 1992; seiford, 1996):
$$a = \frac{(t_3 - t_1) + (t_2 - t_1)}{3} + t_1 \qquad (9)$$
$$a = \frac{1}{2}\left[\lambda\, t_3 + t_2 + (1 - \lambda)\, t_1\right] \qquad (10)$$
where λ represents the optimism index, which describes the dm/a/e's attitude towards decision-making risk. most often, the optimism index takes the value 0, 0.5 or 1, which corresponds to a pessimistic, moderate or optimistic view of the decision maker (milićević, 2014).
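a small python sketch, assuming as in table 2 that the degree of confidence γ scales the spread of the triangular number to (γx, x, (2 - γ)x) and that t_1 is floored at 1 as in eq. (2), illustrates how a crisp saaty grade becomes a fuzzy comparison and how eq. (10) turns it back into a crisp value; it is an illustration of the scale, not the authors' software.

```python
def fuzzy_saaty(x, gamma):
    """triangular fuzzy comparison (t1, t2, t3) for a saaty grade x in {1,...,9}
    and a degree of confidence gamma in (0, 1] (eqs. (2)-(4))."""
    t1 = max(gamma * x, 1.0)   # eq. (2): t1 never drops below the scale minimum
    t2 = float(x)              # eq. (3)
    t3 = (2.0 - gamma) * x     # eq. (4)
    return t1, t2, t3

def inverse(t):
    """inverse fuzzy comparison (eqs. (5)-(7))."""
    t1, t2, t3 = t
    return 1.0 / t3, 1.0 / t2, 1.0 / t1

def defuzzify(t, lam=0.5):
    """total integral value with optimism index lam (eq. (10))."""
    t1, t2, t3 = t
    return 0.5 * (lam * t3 + t2 + (1.0 - lam) * t1)

# weak dominance (3) replaced by strong dominance (5) stated with 70 % confidence,
# as in panel (b) of figure 3
t = fuzzy_saaty(5, 0.7)
print(t)                       # (3.5, 5.0, 6.5)
print(inverse(t))              # (1/6.5, 1/5, 1/3.5)
print(round(defuzzify(t), 3))  # 5.0 for lam = 0.5
```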
3.2 vikor method
vikor (višekriterijumsko kompromisno rangiranje, multi-criteria compromise ranking) is a widely used method of multi-criteria decision-making. it was developed by serafim opricović (1986). it is suitable for solving various decision-making problems and is particularly recommended for situations in which criteria of a quantitative nature prevail. the vikor method starts from the "boundary" forms of the lp metric, where the solution closest to the ideal one is chosen. this metric represents the distance between the ideal point f* and the point f(x) in the space of criterion functions (opricović, 1986). minimizing this metric determines a compromise solution. as a measure of the distance from the ideal point, the following is used:
$$l_p(f^*, f) = \left( \sum_{j=1}^{n} \left[ f_j^* - f_j(x) \right]^p \right)^{1/p}, \qquad 1 \le p \le \infty \qquad (11)$$
the vikor method has been applied in a large number of papers in its original form (nisel, 2014; kuo and liang, 2011; opricović and tzeng, 2004; jokić et al., 2019; radovanović et al. 2020), but also in fuzzy (chatterjee and chakraborty, 2016; ince, 2007; shemshadi et al., 2011) and rough (li and song, 2016; wang et al. 2018) environments. when applying the vikor method, the following notation is used:
n - the number of criteria;
m - the number of alternatives for multi-criteria ranking;
fij - the value of the j-th criterion function for the i-th alternative;
wj - the weight of the j-th criterion;
v - the weight of the strategy of meeting most of the criteria;
i - the ordinal number of the alternative, i = 1, ..., m;
j - the ordinal number of the criterion, j = 1, ..., n;
qi - the measure for the multi-criteria ranking of the i-th alternative.
a qi value is obtained for each alternative, after which the alternative with the lowest qi value is selected. the measure for the multi-criteria ranking of the i-th alternative (qi) is calculated according to the expression (opricović, 1998):
$$q_i = v \cdot qs_i + (1 - v) \cdot qr_i \qquad (12)$$
where:
$$qs_i = \frac{s_i - s^*}{s^- - s^*} \qquad (13)$$
$$qr_i = \frac{r_i - r^*}{r^- - r^*} \qquad (14)$$
by calculating the qsi, qri and qi values for each alternative, three independent ranking lists can be formed. the qsi value is a measure of deviation expressing the requirement for maximum group benefit (the first ranking list). the qri value is a measure of deviation expressing the requirement to minimize the maximum distance of an alternative from the "ideal" alternative (the second ranking list). the qi value establishes a compromise ranking list that combines the qsi and qri values (the third ranking list). by choosing a smaller or larger value of v (the weight of the strategy of satisfying most of the criteria), the decision maker can favour the influence of the qsi or the qri value in the compromise ranking list. for example, higher values of v (v > 0.5) indicate that the decision maker gives greater relative importance to the strategy of satisfying most of the criteria (nikolić et al., 2010). modeling the preferential dependence of the criteria usually includes the weights of the individual criteria. if the weights w1, w2, ..., wn are given, the multi-criteria ranking by the vikor method is realized using the measures si and ri:
$$s_i = \sum_{j=1}^{n} w_j \frac{f_j^* - f_{ij}}{f_j^* - f_j^-} = \sum_{j=1}^{n} w_j d_{ij} \qquad (15)$$
$$r_i = \max_j \left[ w_j \frac{f_j^* - f_{ij}}{f_j^* - f_j^-} \right] = \max_j \left[ w_j d_{ij} \right] \qquad (16)$$
for i = 1, 2, ..., m and j = 1, 2, ..., n, where:
$$s^* = \min_i s_i, \quad s^- = \max_i s_i, \quad r^* = \min_i r_i, \quad r^- = \max_i r_i, \quad f_j^* = \max_i f_{ij}, \quad f_j^- = \min_i f_{ij}$$
alternative ai is better than alternative ak according to the j-th criterion if:
fij ≥ fkj (for max fj, that is, when the criterion is to be maximized);
fij ≤ fkj (for min fj, that is, when the criterion is to be minimized).
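a compact python sketch of eqs. (12)-(16) may make the two deviation measures easier to follow; the decision matrix and weights in the usage example are made up for illustration and do not come from the rectification study.

```python
import numpy as np

def vikor(f, w, maximize, v=0.5):
    """rank alternatives with the vikor method (eqs. (12)-(16)).

    f: (m, n) matrix, f[i, j] = value of criterion j for alternative i
    w: (n,) criteria weights
    maximize: boolean mask, True where a higher criterion value is better
    v: weight of the strategy of satisfying most of the criteria
    returns: (q, s, r); a lower q means a better-ranked alternative
    """
    f = np.asarray(f, dtype=float)
    w = np.asarray(w, dtype=float)
    best = np.where(maximize, f.max(axis=0), f.min(axis=0))   # f*
    worst = np.where(maximize, f.min(axis=0), f.max(axis=0))  # f-
    d = w * (best - f) / (best - worst)                       # weighted distances d_ij
    s, r = d.sum(axis=1), d.max(axis=1)                       # eqs. (15)-(16)
    qs = (s - s.min()) / (s.max() - s.min())                  # eq. (13)
    qr = (r - r.min()) / (r.max() - r.min())                  # eq. (14)
    return v * qs + (1 - v) * qr, s, r                        # eq. (12)

# made-up example: 3 alternatives, 2 criteria (first maximized, second minimized)
q, _, _ = vikor([[7.0, 3.0], [9.0, 5.0], [6.0, 2.0]], [0.6, 0.4], [True, False])
print(np.argsort(q) + 1)  # compromise ranking, best alternative first
```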
in the multi-criteria ranking by the vikor method, alternative ai is better than alternative ak (in total, according to all the criteria) if qi < qk.
0.25 mm)
(iii) standard holes (l/d ≤ 20), where l/d = length/diameter = the slenderness ratio
(iv) standard holes (l/d > 20)
(b) cavities:
(i) precision (aspect ratio ≤ 5)
(ii) standard
(c) surfacing:
(i) double contouring
(ii) surface of revolution
(d) through cutting:
(i) shallow (depth of cut < 40 µm)
(ii) deep (depth of cut > 40 µm)
(e) finishing.
the entire database containing the capabilities of all the considered ntm processes with respect to the workpiece material and the shape and sub-shape features to be generated is stored in ms-access linked with vbasic, and the decisions regarding the values of the various responses and the settings of the ntm process parameters are arrived at on the basis of sets of simple if-then rules.
4. illustrative examples
in order to demonstrate the applicability and usefulness of the developed decision model in the domain of ntm processes, the following three examples are cited.
4.1 example 1: electro-discharge machining
in this example, it is supposed that precision cavities with an aspect ratio ≤ 5 need to be generated on inconel 718 alloy using the edm process. for this machining application, the corresponding input window in the form of a graphical user interface is shown in figure 2.
figure 2.
input window for the first example pressing of the ‘ok’ functional key then leads the end user to the next window, as exhibited in figure 3, where the lists of all the important edm process parameters, i.e. peak current, open circuit voltage, pulse-on time, duty factor, flushing pressure, pulseoff time, dielectric level, tool electrode lift time, polarity, type of the tool and flushing speed, and responses, like surface crack density, tool wear ratio (twr), perpendicularity error (pe), material removal rate (mrr), surface roughness (sr), overcut (oc), electrode wear rate, edge deviation, white layer thickness and microharness are displayed. in this example, at first, the end user selects peak current, open circuit voltage, pulse-on time, duty factor, polarity and type of the tool as the controllable process parameters as available in the considered edm set-up. on the other hand, based on the end product requirements, surface crack density, twr, pe, mrr, sr and micro-hardness are treated with utmost importance. the end user can also choose all the edm process parameters and responses while pressing the ‘select all’ key. now, when the ‘next’ button is pressed, in the subsequent window, as depicted in figure 4, the end user is opted to enter the appropriate values for the preselected edm process parameters based on which the approximate values of the shortlisted responses would be predicted. in this example, the end user chooses the options as peak current = 9 a, open circuit voltage = 60v, pulse-on time = 100 µs, duty factor = 70%, polarity = positive and tool material = copper. now, when the ‘ok’ button is pressed, this decision model would guide the end user to have an idea about various responses envisaged as surface crack density = 0.0055-0.0057 µm/µm2, mrr = 89.12-89.18 mm3/min, sr = 6.2-6.7 µm, pe = 0.09-1.11%, micro-hardness = 392.40392.50 hv and twr = 0.0012-0.0016. development of an intelligent decision model for non-traditional machining processes 201 figure 3. window for selection of edm process parameters and responses in figure 3, when the end user presses the ‘response to pp’ functional key, the settings of the preselected edm process parameters can be predicted based on the chosen values of the shortlisted edm responses. as exhibited in figure 5, the end user desires to have high value of mrr (59.881-89.164 mm3/min), and low values of electrode wear rate (0.011-0.059 mm3/min), sr (2.133-4.866 µm), oc (0.03-0.19 mm), surface crack density (0.008-0.011 µm/µm2), white layer thickness (16.64617.866 µm) and micro-hardness (352.600-407.755 hv). based on these input response values, the developed decision model predicts the related edm process parameters as open circuit voltage = 47-50 v, peak current = 10.5-11.5 a, pulse-on time = 190-200 µs, duty factor = 78-82%, flushing pressure = 0.15-0.25 bar, type of the tool = copper, polarity = positive and pulse-off time = 25-35 µs. it is worthwhile to mention here that among the considered responses, material removal rate is the sole beneficial attribute requiring its higher value, whereas, lower values for the remaining non-beneficial responses are preferred. based on the past experimental data on edm processes (ray, 2016; datta et al., 2017), all the related response values are classified into three groups, i.e. low, medium and high so as to relieve the end user in providing an exact value for a specific response which may sometimes be a difficult task. 
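since the paper only states that the decision model is driven by simple if-then rules over a stored database, the fragment below is a speculative, much-simplified illustration of how one such rule could map a parameter setting to predicted response ranges; the single rule and its bounds are transcribed from the edm example quoted above, while the data structure and matching logic are assumptions, not the authors' ms-access/vbasic implementation.

```python
# one illustrative rule, transcribed from the edm example above:
# parameter setting -> predicted response ranges (lower, upper)
edm_rules = [
    ({"peak_current_a": 9, "open_circuit_voltage_v": 60, "pulse_on_time_us": 100,
      "duty_factor_pct": 70, "polarity": "positive", "tool_material": "copper"},
     {"surface_crack_density_um_per_um2": (0.0055, 0.0057),
      "mrr_mm3_per_min": (89.12, 89.18),
      "sr_um": (6.2, 6.7),
      "perpendicularity_error_pct": (0.09, 1.11),
      "micro_hardness_hv": (392.40, 392.50),
      "twr": (0.0012, 0.0016)}),
]

def predict_responses(setting, rules=edm_rules):
    """return the response ranges of the first stored rule whose condition matches
    the chosen parameter setting, or None if no rule applies."""
    for condition, responses in rules:
        if all(setting.get(k) == v for k, v in condition.items()):
            return responses
    return None

print(predict_responses({"peak_current_a": 9, "open_circuit_voltage_v": 60,
                         "pulse_on_time_us": 100, "duty_factor_pct": 70,
                         "polarity": "positive", "tool_material": "copper"}))
```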
according to the end product requirements, the end user can now be able to opt for only low, medium or high value for a particular response of interest. the derived parametric settings of the considered edm process are only tentative. in order to achieve most accurate target values of the responses, fine-tuning of those parameters may often be needed. chakraborty and kumar/decis. mak. appl. manag. eng. 4 (1) (2021) 194-214 202 figure 4. prediction of responses based on edm process parameters figure 5. prediction of edm process parameters based on the responses in figure 6, when the end user opts for performing deep through cutting operation (depth of cut > 40 µm) on ceramic materials using the edm process, an error message would appear indicating the incapability of edm process to generate the chosen shape feature on ceramics. it can be interestingly noticed that with increasing values of all the edm process parameters, mrr would also increase. higher values of gap voltage, peak current and pulse-on time are all responsible for the available discharge energy to increase, resulting in more melting and vaporization of material from the workpiece. the impulsive force in the spark gap also increases, which is responsible for higher mrr (gopalakannan et. al., 2012). increments in gap voltage and peak current generate stronger discharge energy, creating higher temperature and formation of larger craters on the machined surface, resulting in poor surface quality (kiyak & çakır, 2007) similarly, twr increases with higher values of gap voltage, peak current and cycle time. at these higher parametric settings, there are micro tool wears development of an intelligent decision model for non-traditional machining processes 203 due to availability of higher spark energy density at the machining zone. generally, lower settings of these edm process parameters tend to enhance the possibility of carbon deposition on the tool surface, which finally helps in lowering twr value (lin & lee., 2008). the pe in the machined components occurs due to non-uniform undercut and oc which can be effectively controlled by proper settings of different edm process parameters. with increasing values of gap voltage and peak current, pe shows an increasing trend. at higher gap voltage and peak current, there are occurrences of secondary spark discharges caused by poor flushing as well as sporadic machining which are responsible for inferior pe. during edm operation, oc occurs due to side erosion and removal of debris. at higher settings of voltage, peak current and pulse-on time, availability of higher gap voltage and gap width allows breakdown of the dielectric at a wider gap due to higher electric field. at higher gap voltage and peak current, spark energy density would be more with a faster machining rate, which is also responsible for higher oc. hence, the predicted parametric intermix for the edm process would minimize the oc of the machined components. the above parametric setting can also be validated based on the observations of the past researchers (ray, 2016; prasad & chakraborty, 2018). figure 6. an error message for edm process 4.2 example 2: ultrasonic machining here, the end user desires to generate standard holes with slenderness ratio of less than equal to 20 on titanium (astm grade i) work material while utilizing usm process. figure 7 exhibits the input window for this example. 
in figure 8, from a list of the available controllable parameters for the usm process, type of the abrasive material, abrasive grit size, amplitude of vibration, machining time, type of the tool material, power rating and slurry concentration are first shortlisted. on the other hand, conicity, mrr, sr, tool wear rate (tw) and micro-hardness are opted as the important responses. depending on the requirements, the entire lists of the available usm process parameters and responses can also be selected. the information related to these usm process parameters and responses is compiled from (kumar & khamba, 2010; kataria et al., 2017).
figure 7. input window for example 2
now, in figure 8, when the ‘next’ functional key is pressed, the developed decision model would ask for the values of the shortlisted usm process parameters in another window, as portrayed in figure 9.
figure 8. window for selection of usm process parameters and responses
in this case, the end user chooses the values of different usm parameters as type of the abrasive material = boron carbide, abrasive grit size = 280, amplitude of vibration = 25 µm, machining time = 8.70 min, type of the tool material = tungsten carbide, power rating = 550 w and slurry concentration = 45%. the drop-down menu attached to each of the process parameters guides the user to opt for the most apposite value available in a particular usm set-up.
figure 9. prediction of responses based on usm process parameters
based on these requirements, the developed system predicts the responses as conicity = 0.023-0.038º, sr = 0.78-0.85 µm, mrr = 0.025-0.035 mm3/min, tw = 0.98-1.05 mm3/min and micro-hardness = 155-160 hv. now, when the ‘response to pp’ functional key is pressed in figure 8, this system would jump to a new window, as shown in figure 10, where the end user is asked to input the desired values of the preselected responses in order to guide him/her about the tentative settings of different usm process parameters.
figure 10. prediction of usm process parameters based on the responses
here, a high value of mrr (0.066-0.870 mm3/min), and low values of conicity (0.014-0.032º), out-of-roundness (0.200-0.285 mm), sr (0.48-0.87 µm) and hole oversize (0.075-0.265 mm) are sought by the end user. depending on these requirements, the system advises the user to set the corresponding usm parameters as type of the abrasive material = silicon carbide, abrasive grit size = 400, type of the tool material = hss, power rating = 400-450 w, slurry concentration = 35-37%, slurry flow rate = 6.5-7.5 l/min and feed rate = 1.12-1.28 mm/min. in order to achieve more accurate machining performance, fine-tuning of the settings of the considered usm process parameters may often be required. when the end user chooses the same usm process for generation of precision holes (d ≤ 0.25 mm) to be machined on aluminium work material, an error message, as shown in figure 11, would automatically be generated by the system, indicating that it cannot machine precision holes on aluminium material.
figure 11. a typical error message for usm process
in usm process, when the amplitude of vibration increases, the energy at the tool tip also increases, resulting in higher sr due to the increased impact of the abrasive particles on the workpiece.
furthermore, tw also increases due to increase in the slurry flow rate containing harder abrasive particles, which are bombarded on the tool tip. the cavitation effects also lead to an increase in tw. with increase in amplitude of vibration, there is an increment in mrr as higher amplitude attributes to higher momentum imparted to the abrasive particles before striking the workpiece. it raises the energy with which the abrasive particles collide on the work surface and hence, the micro-crack or micro-crater created by each impact facilitates the material removal process. on the other hand, mrr decreases because the successive impacts between the abrasive grains and the work material may lead to large amount of plastic deformation resulting in the formation of a work-hardened layer, causing reduction in mrr (bhosale et al., 2014). an increment in slurry concentration is responsible for more impact on the work surface leading to higher sr. this also causes an increase in tw since more abrasive particles come into contact with the tool over a given period of time. however, the material removal tendency decreases because of the loss of energy possessed by the abrasives in the slurry. as the number of particles between the tool and the work surface increases due to higher slurry concentration, loss of energy due to interparticle collision may prevail during this phenomenon (kataria et al., 2017; chakraborty et al., 2020). 4.3 example 3: plasma arc machining in this example, deep through cutting operation with depth of cut > 40 µm needs to be performed on a workpiece made of stainless steel using pam process. in order to satisfy these requirements, the corresponding ntm process, work material, shape feature and sub-shape feature are accordingly selected in figure 12. development of an intelligent decision model for non-traditional machining processes 207 figure 12. input window for example 3 for the pam process, based on an extensive survey of the existing literature (xu et al., 2002; das et al., 2014; adalarasan et al., 2015; ramakrishnan et al., 2018), arc voltage, cutting current, cutting speed, feed rate, torch stand-off distance, plasma gas pressure and pierce height are identified as the predominant control parameters influencing its machining performance. on the other hand, the important responses are shortlisted as conicity, chamfer, dross, heat affected zone (haz), kerf width, mrr and sr. now, in figure 13, the end user preselects arc voltage, cutting current, feed rate and torch stand-off distance as the available pam process parameters, and chamfer, dross, kerf width and sr as the desired responses. figure 13. window for selection of pam process parameters and responses the values of these four shortlisted pam process parameters are set as arc voltage = 120 v, cutting current = 42.5 a, feed rate = 945 mm/min and torch stand-off distance = 2.5 mm, as exhibited in figure 14. based on this parametric combination, the developed decision model predicts the shortlisted responses as chamfer = 1.80-1.85 mm, dross = 3.60-3.64 mm2, kerf width = 2.70-2.75 mm and sr = 0.76-0.85 µm. thus, chakraborty and kumar/decis. mak. appl. manag. eng. 4 (1) (2021) 194-214 208 this system would help the process engineers to have an idea about the achievable values of different responses based on a preselected set of parametric combinations. figure 14. 
prediction of responses based on pam process parameters in figure 13, if the end user presses the ‘response to pp’ functional key, a new input window, as shown in figure 15, would now be available where the ranges of values for different responses can be set according to the end product requirements. in this example, the end user opts for high value of mrr (2.06-2.80 mm3/min), and low values for conicity (0.009-0.021º), haz (325-400 µm), chamfer (1.00-1.32 mm), dross (0.45-3.49 mm2), kerf width (1.93-2.53 mm) and sr (0.724-0.875 µm). now, based on these response requirements, the developed system would advise the process engineer to set different parameters of the pam process as feed rate = 930950 mm/min, cutting speed = 2260-2280 mm/min, plasma gas pressure = 4.57-4.92 kg/cm2, arc voltage = 115-125 v and torch stand-off distance = 2.5-4.5 mm. these are only the tentative settings of the considered pam setup. the process engineer may require to fine-tune these settings in order to achieve more accurate results. development of an intelligent decision model for non-traditional machining processes 209 figure 15. prediction of pam process parameters based on the responses as shown in figure 16, when the end user wants to machine precision holes on refractories using the available pam process, the system would automatically generate an error message highlighting its inability to machine the specified work material. figure 16. an error message for pam process in pam process, torch stand-off distance has the strongest effect on the quality characteristics. stand-off distance is one of the crucial parameters in pam process as it controls sr and conicity of the cut edge. it has also been observed that cutting current also influences the haz of the cut edge. it greatly influences sr of the cut due to the fact that the plasma gas beam is not of cylindrical shape but resembles the shape of a reversed candle flame. therefore, depending on the relative position of the plasma to the workpiece surface, the surface quality is drastically affected due to thermal properties of the material (salonitis & vatousianos, 2012). the mrr increases with an increase in gas pressure and high gas flow because it leads to an increase in mean arc voltage and its fluctuations as more heat is transferred into the workpiece, and chakraborty and kumar/decis. mak. appl. manag. eng. 4 (1) (2021) 194-214 210 consequently, sr reduces. however, mrr remains constant with an increase in standoff distance as there is a slight fluctuation in energy. for higher plasma gas flow rate, arc voltage also becomes higher. as the gas flow rate increases, more energy is needed to ionize the gas, therefore the arc voltage should be higher. the kerf is narrower at the top, it widens at the middle, and again becomes narrower at the bottom, making heat distribution along the cut to be irregular. during pam operation, dross formation at the bottom of the workpiece needs to be minimized while controlling the corresponding process parameters. at low speed, input energy to the workpiece is high, causing melting of more materials. dross is formed when adequate force of the plasma jet is not available. to obtain a dross-free cut surface, plasma force and energy input to the workpiece need to be balanced properly. plasma power increases with plasma gas flow rate and arc current. to achieve a square cut of narrow kerf with minimal dross, the decision model can efficiently predict the tentative ranges of the process parameters (mittal & mahajan, 2018). 
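the three examples follow the same pattern: each process contributes its own parameter list, response list and rule set to a common knowledge base, which the interface then queries in both directions. a possible (assumed) shape of that knowledge base, sketched in python for brevity:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessKnowledge:
    """per-process entry of the knowledge base; the field contents below are illustrative."""
    parameters: list
    responses: list
    rules: list = field(default_factory=list)  # parameter settings mapped to response ranges

KNOWLEDGE_BASE = {
    "edm": ProcessKnowledge(["peak current", "open circuit voltage", "pulse-on time", "duty factor"],
                            ["mrr", "sr", "twr", "pe", "surface crack density", "micro-hardness"]),
    "usm": ProcessKnowledge(["abrasive material", "grit size", "amplitude of vibration", "power rating"],
                            ["mrr", "sr", "tw", "conicity", "micro-hardness"]),
    "pam": ProcessKnowledge(["arc voltage", "cutting current", "feed rate", "torch stand-off distance"],
                            ["chamfer", "dross", "kerf width", "sr", "mrr", "haz"]),
}
```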
the same parametric combination for the pam process is also well derived by the past researchers (ramakrishnan et al., 2018; prasad & chakraborty, 2018; chakraborty et al., 2020). 5. conclusions in this paper, an attempt is made to design and develop an intelligent decision model in vbasic so as to help the concerned process engineers in the domain of ntm processes. based on the availability of a particular ntm process, and selected workpiece and shape feature combination, it can identify values of different responses for a given set of parametric combinations. on the other hand, it has also the capability of predicting the tentative settings of different ntm process parameters while meeting the specified values of a given set of responses. in this system, the decision making procedure is based on an exhaustive set of if-then rules, and it consists of all the possible combinations of different ntm processes, work materials and shape features. it is easy to operate as the graphical user interface continuously interacts with the end users restricting them to commit any error. it has also the flexibility to cater any combination of ntm process, work material and shape feature. it warns the end user when a particular machining operation cannot be performed by a specific ntm process. the developed decision model assists the process engineers and designers to efficiently identify the technically feasible ntm processes in the early design and machining stages, enabling in developing the required product functionalities and appearance with the feasible processes in mind, while utilizing the process characteristics more effectively. after the detailed design is complete, the feasible processes identified in the earlier steps can be reevaluated, reassessing their technical feasibilities for manufacturing the designed product. the design can also be modified accordingly, if needed, to ensure manufacturability of the product. the main advantage of this decision model is that it does not require any in-depth technical knowledge regarding the applicability of the ntm processes. it also acts as an expert system to ease out and automate the ntm process selection procedure. this decision model has also some limitations. firstly, it does not take into account the presently available hybrid machining and additive manufacturing processes. moreover, it is developed based on a static database. it would be worth investigating the possibility of integrating the decision model into the ‘cloud’ under the industry 4.0 context, allowing prompt feedback and rapid update. it is also assumed that the developed decision model has no maintenance and operation costs. it lacks the creative responses of the human experts, and is also not able to explain the logic and development of an intelligent decision model for non-traditional machining processes 211 reasoning behind a decision to the end user. it opens opportunities to include micromachining, hybrid machining and additive manufacturing technology selection modules as well as improving selection results while incorporating more selection criteria and work materials in the model. it is expected that the developed model would be well accepted by the manufacturing industries for arriving at the prompt ntm process selection decisions. it can also be implemented in a group decision making environment involving opinions of different process engineers having varying background knowledge and expertise for more pragmatic results. 
its capability, reach and usability may further be enhanced while making it entirely web-based to become accessible to its end users through an internet network. author contributions: each author has participated and contributed sufficiently to take public responsibility for appropriate portions of the content. funding: this research received no external funding. conflicts of interest: the authors declare no conflicts of interest. references adalarasan, r., santhanakumar, m., & rajmohan, m. (2015). application of grey taguchi-based response surface methodology (gt-rsm) for optimizing the plasma arc cutting parameters of 304l stainless steel. international journal of advanced manufacturing technology, 78(5-8), 1161-1170. amalnik, m.s. (2019). design and manufacturing optimization of abrasive water jet machining using expert system. international journal of advanced design and manufacturing technology, 12(1), 101-114 azaryoon, a., hamidon, m., & radwan, a. (2015). an expert system based on a hybrid multi-criteria decision making method for selection of non-conventional machining processes. in applied mechanics and materials, trans tech publications ltd., 735, 4149. bhosale, s.b., pawade, r.s., & brahmankar, p.k. (2014) effect of process parameters on mrr, twr and surface topography in ultrasonic machining of alumina–zirconia ceramic composite. ceramics international, 40(8), 12831-12836. chakladar, n.d., & chakraborty, s. (2008). a combined topsis-ahp-method-based approach for non-traditional machining processes selection. proceedings of the institution of mechanical engineers, part b: journal of engineering manufacture, 222(12), 1613-1623. chakladar, n.d., das, r., & chakraborty, s. (2009). a digraph-based expert system for non-traditional machining processes selection. international journal of advanced manufacturing technology, 43(3), 226-237. chakraborty, s., & dey, s. (2006). design of an analytic-hierarchy-process-based expert system for non-traditional machining process selection. international journal of advanced manufacturing technology, 31(5-6), 490-500. chakraborty, s., & dey, s. (2007). qfd-based expert system for non-traditional machining processes selection. expert systems with applications, 32(4), 1208-1217. chakraborty and kumar/decis. mak. appl. manag. eng. 4 (1) (2021) 194-214 212 chakraborty, s., dandge, s.s., & agarwal, s. (2020). non-traditional machining processes selection and evaluation: a rough multi-attributive border approximation area comparison approach. computers & industrial engineering, 139, 106-201. chandraseelan, e.r., jehadeesan, r., & raajenthiren, m. (2008). web-based knowledge based system for selection of non-traditional machining processes. malaysian journal of computer science, 21(1), 45-56. chatterjee, p., mondal, s., boral, s., banerjee, a., & chakraborty, s. (2017). a novel hybrid method for non-traditional machining process selection using factor relationship and multi-attributive border approximation method. facta universitatis, series: mechanical engineering, 15(3), 439-456. das, m.k., kumar, k., barman, t.k., & sahoo, p. (2014). optimization of process parameters in plasma arc cutting of en31 steel based on mrr and multiple roughness characteristics using grey relational analysis. procedia materials science, 5, 15501559. datta, s., biswal, b.b., & mahapatra, s.s. (2017). a novel satisfaction function and distance-based approach for machining performance optimization during electrodischarge machining on super alloy inconel 718. 
arabian journal for science and engineering, 42(5), 1999-2020. el-hofy, h. (2005). advanced machining processes: nontraditional and hybrid machining processes. mcgraw hill professional, usa. gopalakannan, s., senthivelan, t., & ranganathan, s. (2012). modeling and optimization of edm process parameters on machining of al7075-b4c mmc using rsm, procedia engineering, 38(1), 685-690. jain, v.k. (1980). advanced machining processes. allied publishers pvt. ltd., india. karande, p., & chakraborty, s. (2012). application of promethee-gaia method for non-traditional machining processes selection. management science letters, 2(6), 2049-2060. kataria, r., kumar, j., & pabla, b.s. (2017). ultrasonic machining of wc-co composite material: experimental investigation and optimization using statistical techniques. proceedings of the institution of mechanical engineers, part b: journal of engineering manufacture, 231(5), 867-880. khandekar, a.v., & chakraborty, s. (2016). application of fuzzy axiomatic design principles for selection of non-traditional machining processes. international journal of advanced manufacturing technology, 83(1-4), 529-543. kiyak, m., & çakır, o. (2007). examination of machining parameters on surface roughness in edm of tool steel. journal of materials processing technology, 191(1-3), 141-144. kumar, j., & khamba, j.s. (2010). multi-response optimisation in ultrasonic machining of titanium using taguchi's approach and utility concept. international journal of manufacturing research, 5(2), 139-160. lin, y.c., & lee, h.s. (2008). machining characteristics of magnetic force-assisted edm. international journal of machine tools and manufacture, 48(11), 1179-1186. development of an intelligent decision model for non-traditional machining processes 213 madić, m., petković, d., & radovanović, m. (2015a). selection of non-conventional machining processes using the ocra method. serbian journal of management, 10(1), 61-73. madić, m., radovanović, m., & petković, d. (2015b). non-conventional machining processes selection using multi-objective optimization on the basis of ratio analysis method. journal of engineering science and technology, 10(11), 1441-1452. mittal, s., & mahajan, m.d. (2018). multi-response parameter optimization of cnc plasma arc machining using taguchi methodology. industrial engineering journal, 11(12), 1550-1559. pandey, p.c., & shan, h.s. (1980). modern machining processes. tata mcgraw-hill education, india. prasad, k., & chakraborty, s. (2018). a decision guidance framework for nontraditional machining processes selection. ain shams engineering journal, 9(2), 203214. rajurkar, k.p., hadidi, h., pariti, j., & reddy, g.c. (2017). review of sustainability issues in non-traditional machining processes. procedia manufacturing, 7, 714-720. ramakrishnan, h., balasundaram, r., ganesh, n., & karthikeyan, n. (2018). experimental investigation of cut quality characteristics on ss321 using plasma arc cutting. journal of the brazilian society of mechanical sciences and engineering, 40(60), 1-11. ray, a. (2016). optimization of process parameters of green electrical discharge machining using principal component analysis (pca). international journal of advanced manufacturing technology, 87(5-8), 1299-1311. rohith, r., shreyas, b.k., kartikgeyan, s., sachin, b.a., umesha, k.r., & nanjundeswaraswamy, t.s. (2019). selection of non-traditional machining process. international journal of engineering research & technology, 8(11), 148-155. roy, m.k., ray, a., & pradhan, b.b. (2014). 
non-traditional machining process selection using integrated fuzzy ahp and qfd techniques: a customer perspective. production & manufacturing research, 2(1), 530-549. roy, m.k., ray, a., & pradhan, b.b. (2017). non-traditional machining process selection an integrated approach. international journal for quality research, 11(1), 71-94. saenz, d.c., castillo, n.g., romeva, c.r., & macia, j.l. (2015). a fuzzy approach for the selection of non-traditional sheet metal cutting processes. expert systems with applications, 42(15), 6147-6154. salonitis, k., & vatousianos, s. (2012). experimental investigation of the plasma arc cutting process. procedia cirp, 3(1), 287-292. sarkar, a., panja, s.c., das, d., & sarkar, b. (2015). developing an efficient decision support system for non-traditional machine selection: an application of moora and moosra. production & manufacturing research, 3(1), 324-342. singh, d., & shukla, r.s. (2020). development of firefly algorithm interface for parameter optimization of electrochemical-based machining processes. in applications of firefly algorithm and its variants, springer, singapore, 29-52. chakraborty and kumar/decis. mak. appl. manag. eng. 4 (1) (2021) 194-214 214 sugumaran, v., muralidharan, v., & hegde, b.k. (2010). intelligent process selection for ntm a neural network approach. international journal of industrial engineering research and development, 1(1), 87-96. talib, f., & asjad, m. (2019). prioritisation and selection of non-traditional machining processes and their criteria using analytic hierarchy process approach. international journal of process management and benchmarking, 9(4), 522-546. temuçin, t., tozan, h., valíček, j., & harničárová, m. (2013). a fuzzy based decision support model for non-traditional machining process selection. tehnicki vjesnik technical gazette, 20(5), 787-793. xu, w.j., fang, j.c., & lu, y.s. (2002). study on ceramic cutting by plasma arc. journal of materials processing technology, 129(1-3), 152-156. yurdakul, m., & cçogun, c. (2003). development of a multi-attribute selection procedure for non-traditional machining processes. proceedings of the institution of mechanical engineers, part b: journal of engineering manufacture, 217(7), 993-1009. yurdakul, m., & i̇ç, y.t. (2019). comparison of fuzzy and crisp versions of an ahp and topsis model for nontraditional manufacturing process ranking decision. journal of advanced manufacturing systems, 18(2), 167-192. yurdakul, m., i̇ç, y.t., & atalay, k.d. (2019). development of an intuitionistic fuzzy ranking model for nontraditional machining processes. soft computing, 24(1), 1-16. © 2018 by the authors. submitted for possible open access publication under the terms and conditions of the creative commons attribution (cc by) license (http://creativecommons.org/licenses/by/4.0/). plane thermoelastic waves in infinite half-space caused decision making: applications in management and engineering vol. 4, issue 1, 2021, pp. 153-173. issn: 2560-6018 eissn: 2620-0104 doi: https://doi.org/10.31181/dmame2104153m * corresponding author. e-mail addresses: zizovic@gmail.com (m. zizovic), dragan.pamucar@va.mod.gov.rs (d. pamucar), bole@ravangrad.net (b, miljkovic), karan972a@gmail.com (a, karan). multiple-criteria evaluation model for medical professionals assigned to temporary sars-cov-2 hospitals mališa zizovic 1, dragan pamucar 2*, boža d. miljković 3 and aleksandra r. 
karan 4 1 faculty of technical sciences in cacak, university of kragujevac, cacak, serbia 2 department of logistics, military academy, university of defence in belgrade, belgrade, serbia 3 faculty of education sombor, university of novi sad, sombor, serbia 4 general hospital "dr radivoj simonović", sombor, serbia received: 18 december 2020; accepted: 27 february 2021; available online: 13 march 2021. original scientific paper abstract: hospitals around the world, as health institutions with a key role in the health system, face problems while providing health services to patients with various types of diseases. currently, those problems are intensified due to the pandemic caused by sars-cov-2 virus. this pandemic has caused an extreme spread of the disease with constantly changing needs of patients which impacts the capacities and overall functioning of hospitals. in order to meet the challenge of the covid-19 (coronavirus disease2019) pandemic, health systems must adjust to new circumstances and establish separate hospitals exclusive for patients infected with sars-cov-2 virus. in the process of creating covid-19 hospitals, health systems face a shortage of medical professionals trained for work in covid-19 hospitals. using this as a starting point, this study puts forward a two-phase model for the evaluation and selection of nurses for covid-19 hospitals. each phase of the model features a separate multiple-criteria model. in the first phase, a multiple-criteria model with a dominant criterion is formed and candidates who meet the defined requirements are evaluated. in the second phase, a modified multiple-criteria model is formed and used to evaluate medical professionals who do not meet the requirements of the dominant criterion. by applying this model, two groups of medical professionals are defined: 1) medical professionals who completely meet the requirements for working in covid-19 hospitals and 2) medical professionals who require additional training. the criteria for evaluation of medical professionals in this multiple-criteria model are defined based on research conducted on medical professionals assigned to the covidmailto:zizovic@gmail.com mailto:zizovic@gmail.com mailto:dragan.pamucar@va.mod.gov.rs mailto:bole@ravangrad.net mailto:karan972a@gmail.com zizovic et al./decis. mak. appl. manag. eng. 4 (1) (2021) 153-173 154 19 crisis response team during the covid-19 pandemic in the republic of serbia. the model was tested on a real example of evaluating medical professionals assigned to the covid-19 hospital in sombor. the model for evaluating medical professionals presented in this paper can help decision makers in hospitals and national policy makers to determine the readiness level of hospitals for working in the conditions of the covid-19 pandemic, as well as underline the areas in which hospitals are not ready to meet the challenges of the pandemic. key words: covid-19 pandemic; health care service; multicriteria decision making. 1. introduction covid-19 (coronavirus disease2019) is a disease caused by a novel virus from the group of coronaviruses, first isolated in 1962. since then, it is known that some viruses from the coronavirus group, infect only certain animals, some humans, while some can breach the barrier between species, causing states ranging from a mild cold to severe acute respiratory syndrome (sars). 
a new, so far unknown coronavirus, sars-cov-2, the cause if covid-19 disease, belongs to the same subgroup as merscov and sars-cov, and was first detected in the chinese city of wuhan, in the province of huubei, the ground zero of the epidemic. who (world health organisation) declared a global pandemic in march 2020 and started a research mission into it. the covid-19 pandemic came as a surprise and brought with it many challenges to health systems all over the world, including our own. the immediate demand for staff training, repurposing structures, equipment acquisition, communication management, continuous supervision, healthcare availability to special groups of population under new conditions etc. has created numerous operational, logistical, organizational and ethical tasks for managers, medical professionals and associates. success and weaknesses of the health systems during the pandemic, is graded by the global health safety index, used for evaluating readiness in prevention, detection, fast response to high-risk environments, and following international protocols in new conditions. due to the specifics of the covid-19 pandemic, continuous analysis of all advantages and weaknesses of health systems and planning based on adaptive models of operation and providing services are necessary because the immediate demands and the unpredictability of the covid-19 infection requires agility and flexibility of all services in regards to timely responses, especially in the case of medical professionals. previous research on evaluating the readiness of hospitals and medical professionals for performance in crises and disasters is very limited. this applies especially to the application of multiple-criteria decision models in this field, which limited the analysis of already available literature. most authors consider the application of two concepts for solving this problem: 1) application of multiple-criteria decision making for evaluating readiness of hospitals in crises and 2) application of research based on surveys and statistical analysis. nekoie-moghadam et al. (2016) present a comprehensive review of literature with different methodologies used for evaluating hospitals for work in crises. in their review, they considered the most important topics that relate to evaluating health systems, such as logistics, planning, human resources, communication, management and control, training, evacuation, disaster recovery, coordination, transport, safety (fallah-aliabadi et al. 2020; verheul and duckers, 2020; nekoie-moghadam et al., 2016). of the 15 papers considered in multiple-criteria evaluation model for medical professionals assigned to temporary… 155 total by nekoie-moghadam et al. (2016), all consider the application of research based on statistical analysis of data collected in surveys. in some situations, authors used statistical methodologies, such as the delphi technique and similar tools (rezaei and mohebbi-dehnavi, 2019). a comprehensive review of literature that considers the application of statistical tools for hospital readiness analysis in crises is presented in fallah-aliabadi et al. (2020), verheul and duckers (2020), alruvaili et al. (2019) and nekoie-moghadam et al. (2016). in their study tabatabaei and abbasi (2016) carried out risk assessment during crises based on the hospital safety index. the safety index was defined based on comprehensive research with statistical data processing. samsuddin et al. 
(2018) identified key factors to determine the readiness level of hospitals for work in crises. the results have demonstrated that human resources, their training and ability to adapt in a timely fashion, are the crucial factor. marzaleh et al. (2019.) put forward an approach where the delphi technique is used to evaluate the readiness of emergency services in hospitals for working in crises. their study identified 31 criteria grouped in 3 clusters. the results demonstrate that training of medical professionals has the highest priority. a similar approach for analyzing hospital capacities using the delphi technique was demonstrated by shabanikiya et al. (2019.). however, aside from statistical processing of factor weight used for evaluating hospitals in crises, previous research also contains a number of papers based on multiple-criteria techniques. for example, mulyasari et al. (2013) have applied multiple-criteria techniques for ranking eight hospitals in iran. the ranking was carried out using factors grouped in four clusters, based on which hospital structural and functional readiness in crises was evaluated. following this study, hosseini et al. (2019.) developed a model for ranking hospitals based on the topsis (technique for order of preference by similarity to ideal solution) multiple-criteria technique. the study identifies 21 factors grouped in 4 clusters. however, this study used direct evaluation of survey participants instead of subjective/objective methods for determining weight coefficients of criteria. also, ortiz-barrios et al. (2017) demonstrate the possibility of applying analytical multiple-criteria approach for analyzing the readiness of particular wards/clinics in hospitals during crises. the approach is a hybrid model that consists of: 1) applying the ahp (analytic hierarchy process) and the dematel (decision making trial and evaluation laboratory) methods for determining weight coefficients of criteria and their mutual relationships and 2) applying the topsis method for evaluating hospital capacities. unlike previously listed studies, ortiz-barrios et al. (2017), in addition to defining weight coefficients of criteria, also put forward a methodology for defining mutual relationships and impact of factors used for evaluation. following ortiz-barrios et al. (2017), roy et al. (2018) identified key factors for evaluating hospital capacities using the dematel method. however, unlike ortiz-barrios et al. (2017), roy et al. (2018) used rough numbers to exploit the lack of certainty and precision found in expert preferences. considering that the crisis caused by the covid-19 pandemic is still ongoing, there is no research that considers the problem of evaluating the training of medical professionals for working in covid-19 hospitals. there is a limited number of papers that consider the application of multiple-criteria tools for solving problems caused by the covid-19 pandemic. sarkar (2020) maps the areas susceptible to covid-19 infection in bangladesh. the ahp method, in conjunction with gis (geographic information system) spatial analysis were used for area mapping. sangiorgio and parisi (2020) used artificial neural network and gis for mapping covid-19 infection risks in urban zones in italy. their study showed spatial analyses of a total of 257 city zizovic et al./decis. mak. appl. manag. eng. 4 (1) (2021) 153-173 156 districts. nardo et al. 
(2020) demonstrated the application of multi-criteria decision analysis for determining weights for eleven criteria in order to prioritize covid-19 non-critical patients for admission to hospitals in healthcare settings with limited resources. yildirim et al. (2020) evaluated the available covid-19 treatment options in hospitals. for their evaluation, they used modified promethee (preference ranking organization method for enrichment of evaluations) and vikor (visekriterijumska opitimizacija i kompromisno resenje in serbian) technique by applying fuzzy numbers. as we can see from the above-mentioned studies, there are numerous approaches relating to evaluating readiness of hospitals in crises. most papers contribute by putting forward methodological frames that require application of research featuring statistical processing of data collected via surveys. on the other hand, based on the above-mentioned literature, we can see the significance of multiple-criteria techniques for researching topics pertaining to evaluating readiness of hospital capacities in crises. we can note a wide spectrum of multiple-criteria techniques used in literature and applied in various fields. however, the number of papers that apply these methods for evaluating hospital capacities is limited. furthermore, the number of papers that consider the application of multiple-criteria methods for evaluating hospitals and medical professionals in the conditions of the covid-19 pandemic is especially limited. therefore, it is the aim of this paper to develop a multiple-criteria model for selection and evaluation of medical professionals for working in covid-19 hospitals in the conditions of a pandemic. the suggested model deals with assessing the training of medical professionals for working in the conditions of the covid-19 pandemic. furthermore, the suggested model has the following advantages that improve the literature relating to the application of multiple-criteria techniques in the field of healthcare: 1. an original, multiple-criteria methodology for evaluating propriety of medical professionals for working in covid-19 hospitals has been developed. the demonstrated methodology is conducted in two phases. a separate multiplecriteria model has been developed for each phase. the criteria and criteria value scales have been defined following months of research with the participation of medical professionals from the covid-19 crisis response team of the republic of serbia. 2. the demonstrated methodology is not limited to application in the healthcare field and is applicable in other fields due to its adaptability 3. the suggested methodology provides a new, clear and concise frame for resource management. in order to illustrate the effectiveness of the suggested methodology, an empirical study of the application of this multiple-criteria methodology is presented in the paper. 4. the approach presented in this paper can solve the problem of selection of medical professionals in the conditions of the covid-19 pandemic in a systematic and analytical way. the developed model was implemented and tested in a case study of the covid-19 hospital in sombor (serbia). 5. the model for evaluating medical professionals presented in this paper can help decision makers in hospitals and national policy makers to determine the readiness level of hospitals for working in the conditions of the covid-19 pandemic, as well as underline the areas in which hospitals are not ready to meet the challenges of the pandemic. 
the paper is structured into four sections. after the introduction that presents the problem and analyzes the existing literature, the second section mathematically formulates the multiple-criteria model for evaluating medical professionals in the multiple-criteria evaluation model for medical professionals assigned to temporary… 157 conditions of the covid-19 pandemic. the third section presents the implementation of the multiple-criteria model on a real example of evaluating medical professionals in the covid-19 hospital in sombor, the republic of serbia. the fourth section of the paper presents the conclusion and directions for future research. 2. multiple-criteria evaluation model for medical professionals assigned to covid hospitals let us assume a multiple-criteria model with defined criteria  1 2, ,...,j nc c c c where n stands for the number of criteria used in the multiple-criteria model. also let us assume a set of alternatives  1 2, ,...,i na a a a where m stands for the number of alternatives to be ranked in the model. we can define the decision matrix mxn whose elements ij a stand for value of j criterion for i alternative. table 1. decision matrix 1 c 2 c n c 1 a 11 a 12 a 1n a 2 a 21 a 22 a 2n a m a 1m a 2m a mn a all criteria of the set  1 2, ,...,j nc c c c were assigned weight coefficients  1 2, ,...,j nw w w w that meet the requirement of 1 1 n j j w   . let sc stand for the dominant criterion of the set  1 2, ,...,j nc c c c . in case of the dominant criterion not being met, then the alternative that does not meet it cannot be considered a solution to the problem. ranking of alternatives in the conditions of meeting or partially meeting the dominant criterion s c was considered by žižović et al. (2019). in multiplecriteria models with a dominant criterion, there is a problem where it is necessary to choose more than one alternative, for example p alternative (p  yv a , we select the candidate  xv a . 3. results this chapter considers the case study of organizing hospitals specially prepared for the admission of patients suffering from a sars-cov-2 infection. such hospitals in the republic of serbia were organized as separate parts of existing hospital capacities or were repurposed as entire hospitals to admit only patients diagnosed with covid19 during the crisis. simultaneously, there was a need for medical professionals trained for working in the newly formed medical institutions. there was a need for priority treatment of patients suffering from a largely unknown, high-risk disease, with too few qualified medical professionals in the field. additionally, there was a need for selection of qualified medical professionals to carry out the listed duties, as well as simultaneously train other medical professionals that, at the moment, were not trained for assignment to covid-19 hospitals. zizovic et al./decis. mak. appl. manag. eng. 4 (1) (2021) 153-173 162 this study presents the application of the model for evaluating medical professionals for working in covid-19 hospitals on the example of the covid-19 hospital in sombor, republic of serbia. the study features part of the general hospital in sombor repurposed for treatment of covid-19 patients. eight criteria were identified that were used for evaluating nurses in two phases of the model. 
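written out in standard multi-criteria notation (and consistent with tables 1 and 3), the elements of the model above are:

$$
C=\{c_1,c_2,\dots,c_n\},\qquad A=\{a_1,a_2,\dots,a_m\},\qquad X=\big[a_{ij}\big]_{m\times n},
$$
$$
W=\{w_1,w_2,\dots,w_n\},\qquad \sum_{j=1}^{n} w_j=1,
$$

where a_ij is the value of criterion c_j for alternative a_i, and c_s ∈ C denotes the dominant criterion: an alternative that does not meet c_s cannot be selected as a solution, regardless of its values on the remaining criteria.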
criteria and criteria value scales for evaluation of medical professionals were defined based on a survey with members of the covid-19 crisis response team and with medical doctors who participated in the operation of the covid-19 hospital in sombor. all candidates for assignment in covid-19 hospitals were psychologically tested and interviewed by teams from the covid-19 crisis response team (teams of medical doctors). data obtained in this way served to form evaluations for criteria. below, we present eight criteria with their value scales. the presented criteria are applied for evaluation in the first phase of the model. c1 experience in working with infectious and pulmonary diseases in hospital treatment (c1). value scale:  nurse with work experience at an infectious or pulmonary diseases ward in hospital treatment 5 points;  nurse with work experience at internal medicine ward in hospital treatment 4 points;  nurse with work experience at a different ward in hospital treatment 2 points;  nurse with no work experience in hospital treatment 1 point. c2 professional training of the candidate for working with covid-19 diagnosed (covid+) patients in “covid-19” zone:  professional training for immediate work with covid-19+ patients, use of personal protective equipment and materials in covid-19 conditions, knowledge of work organization in covid-19 conditions, training for movement through safety zones 5 points  professional training for immediate work with covid-19+ patients, use of personal protective equipment and materials in covid-19 conditions, knowledge of work organization in covid-19 conditions 4 points;  professional training for immediate work with covid-19+ patients, use of personal protective equipment and materials in covid-19 conditions 3 points;  professional training for immediate work with covid-19+ patients 2 points;  no professional training 0 points. c3 health risk of the candidate:  no health risk 10 points;  diseases and injuries that do not significantly affect ability 8 points;  physical injuries that partially affect mobility 7 points;  lower risk chronic diseases in covid-19 conditions 6 points;  single parent of a child up to 12 years old 3 points;  parent of a child up to 3 years old 2 points;  age (over 60) 1 point;  chronic diseases like: diabetes, psychosomatic diseases, autoimmune diseases, malignancy, diseases or treatment with a negative influence on the immune system, pregnancy 0 points. c4 physical evaluation of candidates for work in difficult working conditions and in shifts: multiple-criteria evaluation model for medical professionals assigned to temporary… 163  very capable 5 points;  capable 4 points;  partially capable 3 points;  barely capable 1 point. c5 motivation of candidates for working in covid-19 conditions:  very motivated 5 points;  motivated 4 points;  somewhat motivated 2 points;  barely motivated 1 point;  not motivated 0 points. c6 availability of candidate to the workplace:  easily available 5 points;  available 4 points;  poorly available 3 points;  very poorly available 1 point. c7 candidate reliability:  very reliable 5 points;  reliable 4 points;  somewhat reliable 2 points;  unreliable 1 point. c8 candidate’s vaccination history:  mandatory vaccination (with bcg+polio) 5 points;  mandatory vaccination (without bcg/polio) 4 points;  basic vaccination – 2 points;  none 1 point. after defining the criteria, below we present the application of the model on the selection of medical professionals. 
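the eight value scales listed above translate directly into a lookup table from which each candidate's raw scores (the rows of table 3 below) are assembled. a small python sketch, with abbreviated labels and an illustrative candidate; only three of the eight scales are spelled out, and all identifiers are hypothetical:

```python
# point scales for three of the first-phase criteria, copied from the lists above;
# the remaining criteria (c2, c3, c6, c7, c8) are encoded in exactly the same way.
CRITERIA_SCALES = {
    "c1_experience": {"infectious/pulmonary ward": 5, "internal medicine ward": 4,
                      "other hospital ward": 2, "no hospital experience": 1},
    "c4_physical": {"very capable": 5, "capable": 4, "partially capable": 3, "barely capable": 1},
    "c5_motivation": {"very motivated": 5, "motivated": 4, "somewhat motivated": 2,
                      "barely motivated": 1, "not motivated": 0},
}

def score_candidate(answers):
    """translate a candidate's descriptive answers into raw criterion scores."""
    return {criterion: CRITERIA_SCALES[criterion][level] for criterion, level in answers.items()}

print(score_candidate({"c1_experience": "infectious/pulmonary ward",
                       "c4_physical": "very capable",
                       "c5_motivation": "motivated"}))
# -> {'c1_experience': 5, 'c4_physical': 5, 'c5_motivation': 4}
```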
first phase: defining an additional criterion from the set of criteria and forming a multiple-criteria model with a dominant criterion.
step 1: from the defined set of criteria c_j = {c1, c2, ..., c8} for evaluating medical professionals, criterion c1 was defined as the dominant criterion. all candidates who do not meet the conditions defined by criterion c1 move on to the second phase of evaluation, while the ranking of candidates who meet criterion c1 is done in the first phase.
step 2: determining the criteria weight coefficients w_j (j = 1, 2, ..., 8). as noted above, for determining the criteria weight coefficients we used the ndsl model (žižović et al., 2020). determining the criteria weight coefficients using the ndsl model is presented below in steps 2.1-2.5.
step 2.1: determining the most important criterion from the criteria set c_j = {c1, c2, ..., c8} and ranking the criteria. as c1 was defined as the dominant criterion, c1 is also the most important criterion in the set c_j. based on the evaluations of experts, the criteria from the set c_j were ranked as follows: c1>c7>c3>c4>c5>c6>c2>c8.
step 2.2: grouping criteria into significance levels. the criteria were grouped into three levels, as follows: level l_1: c1; level l_2: c3, c4, c5, c7; level l_3: c2, c6, c8.
step 2.3: based on the relations for defining the border values of criteria significance (θ_i), the values θ_i were defined for the criteria at the significance levels: level l_1: θ_1 = 0; level l_2: θ_3 = 17, θ_7 = 19, θ_4 = 20, θ_5 = 24; level l_3: θ_6 = 26, θ_2 = 28, θ_8 = 29. the border values of criteria significance (θ_i) were defined for the value n = 50.
step 2.4: the criteria significance functions f(c_i), i = 1, 2, ..., 8, were defined based on the relation f(c_i) = (n − θ_i)/(n + θ_i): f(c1) = 1.000; f(c3) = 0.493; f(c7) = 0.449; f(c4) = 0.429; f(c5) = 0.351; f(c6) = 0.316; f(c2) = 0.282; f(c8) = 0.266.
step 2.5: based on the defined values f(c_i), i = 1, 2, ..., 8, we arrive at the values of the criteria weight coefficients, table 2.
table 2. criteria weight coefficients: w1 = 0.279, w2 = 0.079, w3 = 0.137, w4 = 0.119, w5 = 0.098, w6 = 0.088, w7 = 0.125, w8 = 0.074.
step 3: forming the preliminary decision matrix. the crisis response team carried out the evaluation of medical professionals from the pulmonology and infectious wards of the general hospital in sombor. these two wards counted in total 43 nurses who were candidates for working in the covid-19 hospital, i.e. for working in the “red” and “orange” zones. out of the available 43 candidates, a selection of 29 nurses was to be made. based on the defined evaluation criteria and the available number of medical professionals, the preliminary decision matrix x = [x_ij] of dimension 43×8 was formed, table 3. table 3.
preliminary decision matrix – first phase alt c1 c2 c3 c4 c5 c6 c7 c8 a1 5 5 10 5 4 4 5 5 a2 2 5 10 5 5 4 5 5 a3 5 5 10 5 5 4 4 5 a4 1 2 0 1 1 5 1 5 a5 5 5 10 4 5 5 5 4 a6 5 5 10 3 5 5 5 5 a7 5 5 7 3 5 3 5 5 multiple-criteria evaluation model for medical professionals assigned to temporary… 165 alt c1 c2 c3 c4 c5 c6 c7 c8 a8 2 5 7 5 2 4 4 5 a9 5 5 6 4 4 5 5 2 a10 5 5 8 4 5 5 5 5 a11 5 5 10 5 2 4 2 5 a12 5 5 2 5 4 4 5 5 a13 1 5 10 5 5 5 5 5 a14 5 5 6 5 0 3 2 5 a15 5 5 10 4 5 3 5 5 a16 1 0 0 1 0 5 5 5 a17 5 5 2 5 2 5 2 5 a18 1 5 3 5 2 1 4 5 a19 5 5 10 5 5 5 5 2 a20 5 5 6 3 2 3 4 5 a21 5 5 10 5 5 5 5 5 a22 1 4 8 4 2 5 5 5 a23 2 4 7 3 0 5 5 4 a24 2 3 7 3 1 5 5 5 a25 2 3 7 3 1 5 5 5 a26 5 5 6 4 5 3 4 5 a27 5 3 1 1 1 5 5 5 a28 5 2 1 3 1 5 5 2 a29 2 3 1 1 1 5 5 2 a30 5 5 2 3 2 1 5 5 a31 2 5 10 5 5 5 5 5 a32 5 5 7 3 5 5 1 4 a33 5 5 10 5 5 5 5 4 a34 2 5 8 5 5 1 4 4 a35 5 5 10 5 4 5 2 5 a36 5 5 10 5 5 5 5 5 a37 5 5 10 5 5 4 4 5 a38 2 5 6 4 4 3 4 5 a39 1 5 10 5 5 5 5 5 a40 2 5 10 5 5 4 5 5 a41 2 5 6 4 2 5 4 5 a42 5 5 10 5 5 1 5 5 a43 5 5 10 5 5 5 5 5 step 4: normalization of elements of the preliminary decision matrix. since all the considered criteria fall under the max type (higher value is better), for the normalization of values we used the expression (1). the elements of the normalized matrix are shown in table 4. table 4. normalized decision matrix – first phase alt. c1 c2 c3 c4 c5 c6 c7 c8 a1 1.0 1.0 1.0 1.0 0.8 0.8 1.0 1.0 a2 0.4 1.0 1.0 1.0 1.0 0.8 1.0 1.0 a3 1.0 1.0 1.0 1.0 1.0 0.8 0.8 1.0 a4 0.2 0.4 0.0 0.2 0.2 1.0 0.2 1.0 a5 1.0 1.0 1.0 0.8 1.0 1.0 1.0 0.8 a6 1.0 1.0 1.0 0.6 1.0 1.0 1.0 1.0 zizovic et al./decis. mak. appl. manag. eng. 4 (1) (2021) 153-173 166 alt. c1 c2 c3 c4 c5 c6 c7 c8 a7 1.0 1.0 0.7 0.6 1.0 0.6 1.0 1.0 a8 0.4 1.0 0.7 1.0 0.4 0.8 0.8 1.0 a9 1.0 1.0 0.6 0.8 0.8 1.0 1.0 0.4 a10 1.0 1.0 0.8 0.8 1.0 1.0 1.0 1.0 a11 1.0 1.0 1.0 1.0 0.4 0.8 0.4 1.0 a12 1.0 1.0 0.2 1.0 0.8 0.8 1.0 1.0 a13 0.2 1.0 1.0 1.0 1.0 1.0 1.0 1.0 a14 1.0 1.0 0.6 1.0 0.0 0.6 0.4 1.0 a15 1.0 1.0 1.0 0.8 1.0 0.6 1.0 1.0 a16 0.2 0.0 0.0 0.2 0.0 1.0 1.0 1.0 a17 1.0 1.0 0.2 1.0 0.4 1.0 0.4 1.0 a18 0.2 1.0 0.3 1.0 0.4 0.2 0.8 1.0 a19 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.4 a20 1.0 1.0 0.6 0.6 0.4 0.6 0.8 1.0 a21 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 a22 0.2 0.8 0.8 0.8 0.4 1.0 1.0 1.0 a23 0.4 0.8 0.7 0.6 0.0 1.0 1.0 0.8 a24 0.4 0.6 0.7 0.6 0.2 1.0 1.0 1.0 a25 0.4 0.6 0.7 0.6 0.2 1.0 1.0 1.0 a26 1.0 1.0 0.6 0.8 1.0 0.6 0.8 1.0 a27 1.0 0.6 0.1 0.2 0.2 1.0 1.0 1.0 a28 1.0 0.4 0.1 0.6 0.2 1.0 1.0 0.4 a29 0.4 0.6 0.1 0.2 0.2 1.0 1.0 0.4 a30 1.0 1.0 0.2 0.6 0.4 0.2 1.0 1.0 a31 0.4 1.0 1.0 1.0 1.0 1.0 1.0 1.0 a32 1.0 1.0 0.7 0.6 1.0 1.0 0.2 0.8 a33 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.8 a34 0.4 1.0 0.8 1.0 1.0 0.2 0.8 0.8 a35 1.0 1.0 1.0 1.0 0.8 1.0 0.4 1.0 a36 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 a37 1.0 1.0 1.0 1.0 1.0 0.8 0.8 1.0 a38 0.4 1.0 0.6 0.8 0.8 0.6 0.8 1.0 a39 0.2 1.0 1.0 1.0 1.0 1.0 1.0 1.0 a40 0.4 1.0 1.0 1.0 1.0 0.8 1.0 1.0 a41 0.4 1.0 0.6 0.8 0.4 1.0 0.8 1.0 a42 1.0 1.0 1.0 1.0 1.0 0.2 1.0 1.0 a43 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 step 5: evaluation and selection of medical professionals who meet the dominant criterion 1 c . medical professionals who have the value of the dominant criterion 1 4c  enter the evaluation process in the first phase of selection. border value for the dominant criterion is defined based on the evaluation of experts. the remaining candidates who do not meet the requirement move on to the second phase of selection. 
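the grading applied in table 5 below can be reproduced from the data above: raw scores are divided by the column maximum (table 4), the ndsl weights of table 2 are applied, and the dominant criterion c1 is kept outside the λ-scaled sum, with λ = 0.125 as used in the worked example further down. the python sketch below reproduces the grade of candidate a1; the functional form is an inference consistent with tables 4 and 5, not a verbatim reproduction of expressions (1) and (3).

```python
# first-phase grading sketch: weights from table 2, lambda = 0.125, and the
# normalisation n_ij = x_ij / max_i x_ij for benefit-type criteria (all criteria
# here are of the max type).
WEIGHTS = {"c1": 0.279, "c2": 0.079, "c3": 0.137, "c4": 0.119,
           "c5": 0.098, "c6": 0.088, "c7": 0.125, "c8": 0.074}
COLUMN_MAX = {"c1": 5, "c2": 5, "c3": 10, "c4": 5, "c5": 5, "c6": 5, "c7": 5, "c8": 5}
LAMBDA = 0.125

def grade(raw):
    n = {j: raw[j] / COLUMN_MAX[j] for j in raw}           # normalised row (table 4)
    rest = sum(WEIGHTS[j] * n[j] for j in n if j != "c1")  # non-dominant criteria
    return WEIGHTS["c1"] * n["c1"] + LAMBDA * rest         # dominant criterion kept apart

a1 = {"c1": 5, "c2": 5, "c3": 10, "c4": 5, "c5": 4, "c6": 4, "c7": 5, "c8": 5}
print(round(grade(a1), 4))  # 0.3644, the value reported for v(a1) in table 5
```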
based on the analysis of the preliminary decision matrix (table 3) we see that 26 candidates meet the requirement of 1 4c  . by applying the expression (3) we evaluate and rank the candidates, table 5. multiple-criteria evaluation model for medical professionals assigned to temporary… 167 table 5. grades of candidates who meet the requirement of the dominant criterion evaluation function v(ai) rank v(a21)=0.279+λ 0.72 0.3690 1 v(a36)=0.279+λ 0.72 0.3690 1 v(a43)=0.279+λ 0.72 0.3690 1 v(a33)=0.279+λ 0.705 0.3672 4 v(a1)=0.279+λ 0.683 0.3644 5 v(a5)=0.279+λ 0.681 0.3642 6 v(a3)=0.279+λ 0.677 0.3637 7 v(a37)=0.279+λ 0.677 0.3637 7 v(a19)=0.279+λ 0.676 0.3635 9 v(a6)=0.279+λ 0.672 0.3631 10 v(a10)=0.279+λ 0.669 0.3626 11 v(a15)=0.279+λ 0.661 0.3616 12 v(a42)=0.279+λ 0.65 0.3602 13 v(a35)=0.279+λ 0.625 0.3572 14 v(a7)=0.279+λ 0.596 0.3535 15 v(a26)=0.279+λ 0.581 0.3517 16 v(a9)=0.279+λ 0.577 0.3512 17 v(a12)=0.279+λ 0.573 0.3507 18 v(a11)=0.279+λ 0.569 0.3501 19 v(a32)=0.279+λ 0.517 0.3436 20 v(a20)=0.279+λ 0.499 0.3413 21 v(a17)=0.279+λ 0.477 0.3386 22 v(a14)=0.279+λ 0.457 0.3361 23 v(a30)=0.279+λ 0.434 0.3332 24 v(a27)=0.279+λ 0.392 0.3279 25 v(a28)=0.279+λ 0.379 0.3264 26 v(a31)=0.112+λ 0.288 0.1476 27 v(a2)=0.112+λ 0.281 0.1467 28 v(a40)=0.112+λ 0.281 0.1467 28 v(a34)=0.112+λ 0.233 0.1407 30 v(a8)=0.112+λ 0.231 0.1405 31 v(a38)=0.112+λ 0.225 0.1397 32 v(a41)=0.112+λ 0.223 0.1395 33 v(a24)=0.112+λ 0.209 0.1377 34 v(a25)=0.112+λ 0.209 0.1377 34 v(a23)=0.112+λ 0.201 0.1367 36 v(a29)=0.112+λ 0.139 0.1290 37 v(a13)=0.056+λ 0.144 0.0738 38 v(a39)=0.056+λ 0.144 0.0738 38 v(a22)=0.056+λ 0.119 0.0707 40 v(a18)=0.056+λ 0.094 0.0675 41 v(a16)=0.056+λ 0.062 0.0636 42 v(a4)=0.056+λ 0.052 0.0624 43 zizovic et al./decis. mak. appl. manag. eng. 4 (1) (2021) 153-173 168 an example of defining a candidate grade  1v a , on the condition that 0.125  (expression (3)), is given below:   1.0 0.279 1.0 0.125(1.0 0.079 1.0 0.137 ... 1.0 0.074) 0.364iv a            in a similar way, the remaining candidates’ grades were attained, as shown in table 5. the mandatory criteria were met by 26 candidates, who were ranked in table 5. since 29 candidates were needed for the covid-19 hospital, the remaining 17 candidates were ranked in the second phase of the model with the aim of selecting 3 additional, most appropriate candidates. second phase: forming a modified multiple-criteria model. in the modified multiple-criteria model, we adjust the starting set of criteria. criteria c1 and c2 are eliminated and in their place, we insert new ones: * 1 c evaluation of speed of acquiring knowledge and skills for working in covid-19 institutions:  very quick and safe start of work activities – 5 points;  sufficiently quick and safe start of work activities – 4 points;  satisfactorily quick and safe start of work activities – 3 points;  slow but safe start of work activities – 2 points;  slow and barely safe start of work activities – 1 point. * 2 c school grades in subjects close to the needs of the position. for this criterion, an average grade was taken, from the interval [2, 5]. the remaining criteria were unchanged, same as in the first phase, i.e. * 1 c , * 2 c , * 3 3 c c , * 4 4 c c , * 5 5 c c , * 6 6 c c , * 7 7 c c , * 8 8 c c . this forms the final set of criteria used for the evaluation of medical professionals in the second phase  * * * *1 2 8, ,...,jc c c c . 
the modified multiple-criteria model with a newly defined criteria set * j c ( 1, 2,...,8j  ) is conducted in four steps presented below. step 1: defining the criteria set  * * * *1 2 8, ,...,jc c c c and calculation of newly formed set of criteria weight coefficients * j w ( 1, 2,...,8j  ). similar to the first phase (step 2), weight coefficients are defined using the ndsl model (žižović et al., 2020). step 1.1: based on the evaluation of experts, the criteria from the set  * * * *1 2 8, ,...,jc c c c were ranked as follows: c1*>c7*>c4*>c6*>c3*>c8*>c5*>c2*. step 1.2: the criteria were grouped into sets of four levels, as follows: level 1 l : * * *1 7 4, ,c c c , level 2 l : * *6 3,c c , level 3 l : * *8 5,c c , level 4 l : *2c . step 1.3: based on relations for defining border values of criteria significance ( i  ) we define the values i  for criteria in significance levels: 2 1 7 4 6 3 8 5 2 1 3 4 0; 5; 10 18; 20; 26; 28; 32. : : : : level l level l level l level l                 multiple-criteria evaluation model for medical professionals assigned to temporary… 169 the values of border values of criteria significance ( i  ) were defined for the value 50n  . steps 1.4 and 1.5: based on the defined values * ( ) i f c , 1, 2,...,8i  we arrive at the values of criteria weight coefficients, table 6. table 6. criteria weight coefficients criterion * j w c1* 0.238 c2* 0.052 c3* 0.102 c4* 0.158 c5* 0.067 c6* 0.112 c7* 0.194 c8* 0.077 step 2: forming the preliminary decision matrix. since the minimum criteria were not met by 17 of 43 candidates, we form the preliminary decision matrix of rank , table 7. table 7. preliminary decision matrix second phase alt. c1* c2* c3* c4* c5* c6* c7* c8* a2 5 5 10 5 5 4 5 5 a4 5 2 0 1 1 5 1 5 a8 2 5 7 5 2 4 4 5 a13 2 5 10 5 5 5 5 5 a16 4 0 0 1 0 5 5 5 a18 5 5 3 5 2 1 4 5 a22 4 4 8 4 2 5 5 5 a23 4 4 7 3 0 5 5 4 a24 2 3 7 3 1 5 5 5 a25 5 3 7 3 1 5 5 5 a29 4 3 1 1 1 5 5 2 a31 4 5 10 5 5 5 5 5 a34 2 5 8 5 5 1 4 4 a38 4 5 6 4 4 3 4 5 a39 4 5 10 5 5 5 5 5 a40 4 5 10 5 5 4 5 5 a41 5 5 6 4 2 5 4 5 steps 3 and 4: normalization of elements of the preliminary decision matrix is done by applying the expression (4). after the normalization of the decision matrix elements, by applying expression (6) we define the grade for each candidate. the final grades and candidate ranking are shown in table 8. zizovic et al./decis. mak. appl. manag. eng. 4 (1) (2021) 153-173 170 table 8. candidate grades after second evaluation phase alt. v(ai) rank v(a2) 0.9776 1 v(a31) 0.9524 2 v(a39) 0.9524 2 v(a40) 0.9300 4 v(a13) 0.8572 5 v(a22) 0.8498 6 v(a41) 0.8486 7 v(a25) 0.8318 8 v(a38) 0.7830 9 v(a23) 0.7658 10 v(a18) 0.7600 11 v(a8) 0.7252 12 v(a34) 0.6930 13 v(a24) 0.6890 14 v(a29) 0.6136 15 v(a16) 0.6050 16 v(a4) 0.5316 17 after the evaluation of candidates (table 8), three best ranked candidates were selected, and after completing a training cycle, were assigned to a covid-19 hospital. the training program is defined by the crisis response team. the remaining 14 candidates also completed the training program but are currently not assigned to the covid-19 hospital. they are available for assignment in case of assigned staff being removed from the team for self-isolation. self-isolation may be a consequence of accidental exposure (human error, breakdown of equipment, etc.) or disease. 5. conclusions management of human resources is a key segment that affects the efficacy of the health system of every country. 
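the second-phase grade behaves as a plain weighted sum of the max-normalised scores with the weights of table 6, with no criterion treated as dominant in this phase; read this way, the top-ranked candidate a2 from table 7 reproduces the value reported in table 8:

$$
v(a_2)=\sum_{j=1}^{8} w_j^{*}\,\frac{x_{2j}}{\max_i x_{ij}}
      =0.238+0.052+0.102+0.158+0.067+0.112\cdot 0.8+0.194+0.077=0.9776 .
$$

this reading is only offered as being consistent with the values reported in tables 7 and 8; the exact expressions (4) and (6) of the original formulation are not restated here.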
this is especially obvious in crises, like the covid-19 pandemic. this is why it is necessary to efficiently manage human resources in hospitals, to reduce as much as possible the dangers caused by the covid-19 pandemic. as far as the authors are aware, there are no current models for considering the evaluation of medical professionals’ training for working in crises, so the motivation for a study such as this is logical. in this paper, we put forward a multiple-criteria model that allows decision makers in medical institutions and national crisis response teams to evaluate the training of medical professionals for working in the conditions of the covid-19 pandemic. for the needs of this multiple-criteria model, we defined criteria based on which we evaluate medical professionals. the criteria and criteria evaluation scales were defined after months of research with participation from health institution managers and members of the crisis response team of serbia. the developed multiple-criteria model is conducted in two phases. the first phase evaluates medical professionals according to one or more dominant criteria. medical professionals who meet the conditions defined in the first phase, meet the conditions necessary for working in a covid-19 hospital. medical professionals who do not meet the conditions defined in multiple-criteria evaluation model for medical professionals assigned to temporary… 171 the first phase, move on to the second phase of evaluation. after completing the second phase of evaluation, the staff who partially meet the conditions are identified and they undergo training for working in covid-19 hospitals. this methodology was applied to the example of the covid-19 hospital in sombor. the suggested methodology can be used for other decision problems, by adapting the criteria according to the nature of the decision problem. the basic advantage of this study is application, i.e. testing of the suggested methodology on objective data in the conditions of the covid-19 pandemic. this demonstrates, on a real example, all advantages of this methodology. future research should be directed towards implementing the suggested methodology in the conditions of uncertain input model parameters (ecer and pamucar, 2020). uncertainty in future research can be exploited by applying various uncertainty theories such as fuzzy theory or rough theory. author contributions: each author has participated and contributed sufficiently to take public responsibility for appropriate portions of the content. funding: this research received no external funding. conflicts of interest: the authors declare that there is no conflict of interest. references behzad, m., hashemkhani zolfani, s., pamucar, d., & behzad, m. (2020). a comparative assessment of solid waste performance management in the nordic countries based on bwm-edas. journal of cleaner production, 266 122008, https://doi.org/10.1016/j.jclepro.2020.122008. ecer, f., & pamucar, d. (2020). sustainable supplier selection: a novel integrated fuzzy best worst method (f-bwm) and fuzzy cocoso with bonferroni (cocoso'b) multicriteria model. journal of cleaner production, 266, 121981. https://doi.org/10.1016/j.jclepro.2020.121981. fallah-aliabadi, s., ostadtaghizadeh, a., ardalan, a., fatemi, f., khazai, b., & mirjalili, m. r. (2020). towards developing a model for the evaluation of hospital disaster resilience: a systematic review. bmc health services research, 20(1), 64-73. hosseini, s. m., bahadori, m., raadabadi, m., & ravangard, r. (2019). 
ranking hospitals based on the disasters preparedness using the topsis technique in western iran. hospital topics, 97(1), 23-31. marzaleh, m. a., rezaee, r., rezaianzadeh, a., rakhshan, m., hadadi, g., & peyravi, m. (2019). emergency department preparedness of hospitals for radiation, nuclear accidents, and nuclear terrorism: a qualitative study in iran. iranian red crescent medical journal, 21(5), 300–306. mulyasari, f., inoue, s., prashar, s., isayama, k., basu, m., srivastava, n., & shaw, r. (2013). disaster preparedness: looking through the lens of hospitals in japan. international journal of disaster risk science, 4(2), 89-100. nardo, p.d., gentilotti, e., mazzaferri, f., cremonini, e., hansen, p., goossens, h., & tacconelli, e. (2020). multi-criteria decision analysis to prioritize hospital admission zizovic et al./decis. mak. appl. manag. eng. 4 (1) (2021) 153-173 172 of patients affected by covid-19 in low-resource settings with hospital-bed shortage. international journal of infectious diseases, 98, 494-500. nekoie-moghadam, m., kurland, l., moosazadeh, m., ingrassia, p. l., della corte, f., & djalali, a. (2016). tools and checklists used for the evaluation of hospital disaster preparedness: a systematic review. disaster medicine and public health preparedness, 10(5), 781-788. ortiz‐barrios, m. a., aleman‐romero, b. a., rebolledo‐rudas, j., maldonado‐mestre, h., montes‐villa, l., de felice, f., & petrillo, a. (2017). the analytic decision‐making preference model to evaluate the disaster readiness in emergency departments: the adt model. journal of multi‐criteria decision analysis, 24(5-6), 204-226. pamucar, d., behzad, m., bozanic, d., & behzad, m. (2020c). decision making to support sustainable energy policies corresponding to agriculture sector: case study in iran’s caspian sea coastline. journal of cleaner production, 292, 125302, https://doi.org/10.1016/j.jclepro.2020.125302. pamucar, d., cirovic, g., zizovic, m.m., & miljkovic, b. (2020a). a model for determining weight coefficients by forming a non-decreasing series at criteria significance levels (ndsl). mathematics, 8(5), 745. pamucar, d., yazdani, m., obradovic, r., kumar, a., & torres-jiménez, m. (2020b). a novel fuzzy hybrid neutrosophic decision-making approach for the resilient supplier selection problem. international journal of intelligent systems, 35(12), 1934-1986. rezaei, f., & mohebbi-dehnavi, z. (2019). evaluation of the readiness of hospitals affiliated to isfahan university of medical sciences in unexpected events in 2017. journal of education and health promotion, 8. roy, j., adhikary, k., kar, s., & pamucar, d. (2018). a rough strength relational dematel model for analysing the key success factors of hospital service quality. decision making: applications in management and engineering, 1(1), 121-142. samsuddin, n. m., takim, r., nawawi, a. h., & alwee, s. n. a. s. (2018). disaster preparedness attributes and hospital’s resilience in malaysia. procedia engineering, 212, 371-378. sangiorgio, v., & parisi, p. (2020). a multicriteria approach for risk assessment of covid-19 in urban district lockdown. safety science, 130, 104862, https://doi.org/10.1016/j.ssci.2020.104862. sarkar, s. (2020). covid-19 susceptibility mapping using multicriteria evaluation. disaster medicine and public health preparedness, 14(4), 1-17. shabanikiya, h., jafari, m., gorgi, h. a., seyedin, h., & rahimi, a. (2019). developing a practical toolkit for evaluating hospital preparedness for surge capacity in disasters. 
international journal of disaster risk reduction, 34, 423-428.

tabatabaei, s. a. n., & abbasi, s. (2016). risk assessment in social security hospitals of isfahan province in case of disasters based on the hospital safety index. international journal of health system and disaster management, 4(3), 82.

verheul, m. l., & dückers, m. l. (2020). defining and operationalizing disaster preparedness in hospitals: a systematic literature review. prehospital and disaster medicine, 35(1), 61-68.

yildirim, f.s., sayan, m., sanlidag, t., uzun, b., uzun, d., & ozsahin, i. (2020). a clinical decision support system for the treatment of covid-19 with multi-criteria decision-making techniques. jmir medical informatics, 22, 88-106.

zizovic, m.m., albijanic, m., jovanović, v., & zizovic, m. (2019). a new method of multi-criteria analysis for evaluation and decision making by dominant criterion. informatica, 30(4), 819-832.

© 2021 by the authors. submitted for possible open access publication under the terms and conditions of the creative commons attribution (cc by) license (http://creativecommons.org/licenses/by/4.0/).

decision making: applications in management and engineering
vol. 1, issue 2, 2018, pp. 16-33
issn: 2560-6018, eissn: 2620-0104
doi: https://doi.org/10.31181/dmame1802016b

supplier selection using the rough bwm-mairca model: a case study in pharmaceutical supplying in libya

ibrahim badi*1, mohamed ballem1
1 misurata university, faculty of engineering, mechanical engineering department, libya
* corresponding author. e-mail addresses: ibrahim.badi@hotmail.com (i. badi), mohamed.ballem@eng.misuratau.edu.ly (m. ballem)

received: 2 january 2018; accepted: 30 june 2018; available online: 3 july 2018.

original scientific paper

abstract: the quality of the health system in libya has witnessed a considerable decline since the revolution in 2011. one of the major problems this sector is facing is the loss of control over the supply of medicines and pharmaceutical equipment from international suppliers for both the public and private sectors. in order to take the right decision and select the best medical suppliers among the available ones, many criteria have to be considered and tested. this paper presents a multiple-criteria decision-making analysis using modified bwm (best-worst method) and mairca (multi-attribute ideal-real comparative analysis) methods. in the present case study five criteria and three suppliers are identified for supplier selection. the results of the study show that cost comes first, followed by quality as the second and company profile as the third relevant criterion. the model was tested and validated on a study of the optimal selection of a supplier.

key words: supplier selection, multi-criteria decision-making, rough numbers, bwm, mairca.

1 introduction

selecting and managing medicines and pharmaceutical equipment supplies for primary health care services has a significant impact on the quality of patient care and represents a high proportion of health care costs. in developing countries health services need to choose appropriate supplies, equipment and drugs in order to meet priority health needs and avoid wasting their limited resources.
items can be inappropriate because they are technically unsuitable or incompatible with existing equipment, if spare parts are not available, or, because staff have not been trained to use them (kaur et al., 2001). recently, supplier evaluation and selection have received more attention from various researchers in the literature (mardani et al., mailto:ibrahim.badi@hotmail.com mailto:mohamed.ballem@eng.misuratau.edu.ly supplier selection using rough bwm-mairca model: a case study in pharmaceutical… 17 2016; de boer et al., 2001; govidan et al., 2015; chai et al., 2013; prakash et al., 2015; abdulshahed et al., 2017; badi et al., 2018; stević et al., 2017 a). supplier selection is a multi-criteria problem which includes both quantitative and qualitative factors (liang et al., 2013). generally, the criterion for supplier selection is highly dependent on individual industries and companies. therefore, different companies have different management strategies, enterprise culture and competitiveness. furthermore, company background can make a huge difference and can impact supplier selection. thus, the identification of supplier selection criteria is largely requiring the domain expert’s assessment and judgment. to select the best supplier, it is necessary to make a trade-off between these qualitative and quantitative factors some of which may be in conflict (ghodsypour & o'brien, 1998). the traditional supplier selection methods are often based on the quoted price, which ignores significant direct and indirect costs associated with quality, delivery, and service cost of purchased materials; however, uncertainty is present because the future can never be exactly predicted. the selection of the best supplier is done based on quoted price and considering all the possibilities of the analysis, but there is always uncertainty about indirect costs associated with quality, delivery time, and the like. one of the key problems in the supplier selection is to find the best supplier among several alternatives according to various criteria, such as service, cost, risk, and others. after identifying the criteria, a systematic methodology is required to integrate experts’ assessments in order to find the best supplier. at present, various methods have been used for the supplier selection, such as the analytic network process (anp) and the analytical hierarchy process (ahp) (porras-alvarado et al., 2017). ahp is a common multicriteria decision-making method; it is developed by saaty (saaty, 1979; saaty, 1990) to provide a flexible and easily understood way of analyzing complex problems. the method breaks a complex problem into hierarchy or levels, and then makes comparisons among all possible pairs in a matrix to give a weight for each factor and a consistency ratio. libya began privatizing the pharmaceutical system in 2003. pharmaceutical supplies were previously provided to both public and private sectors by the national company of pharmaceutical industry (ncpi), but drug companies are also permitted to market and supply their products to both public and private health sectors through local agencies. in 2009, over 300 international pharmaceutical manufacturers from europe, asia, and the middle east were registered as permitted drug suppliers for libya (alsageer, 2013). all the drugs consumed in libya are imported except few items, which are manufactured locally. the headquarters of the ncpi until 2003 was responsible for all drug manufacture and imports in libya. 
its branches are the channels of drugs distribution for governmental hospitals, private pharmacies, and clinics (khalifa et al., 2017). from 2004 till date the libyan secretariat of health, by executing a public tender through medical supply organization (mso), has been responsible for purchasing and distributing drugs to public hospitals and clinics. worth noting is that, on sporadic intervals, the budget has been allocated to the major public hospitals to locally purchase their own general drug demands. however, since 2011 (post-17th february 2011 revolution) mso has lost its control on importing medicines due to receiving many drugs as donations from different international sources without acceptable level of coordination (zhai et al., 2008); this has resulted in the supply of pharmaceuticals and medical equipment growing considerably in recent years. for badi & ballem/decis. mak. appl. manag. eng. 1 (2) (2018) 16-33 18 instance, in misrata (the third-largest city in libya) the number of companies operating in the field of medical supply exceeded 170 companies, and more than 425 companies in tripoli (capital city). the items that are supplied vary but the most common drugs are capsules, injections, ointments, inhalants, solutions, etc.; these drugs and materials are supplied from several countries, including arab (e.g. egypt, morocco, algeria, uae, and jordan), european (e.g. germany, switzerland, and britain), and asian ones (e.g. india, china, and malaysia) as well as america. the suppliers in each of these countries have some special characteristics distinguishing them from others. the closest arab countries have the ability to speed supply and hence the flexibility in providing these drugs more quickly than the rest. on the other hand, products coming from european countries are of better quality, but their prices are higher compared to competitors from other countries. thus, to make informed choices about what to buy and what to select among available suppliers, clear criteria for selection remain important, and efforts should be made to make suitable decision support tools for right decision-making. in this paper, a rough bwm-mairca model for selection of the best supplier is proposed. the presented model is used for the analysis of the supplier selection process in pharmaceutical supplies in libya. in this case study there are three suppliers with high medicine supplies to libya. in order to maintain confidentiality of the supplier, we have denoted the given suppliers as a, b, and c. 2. rough numbers in group decision-making problems, the priorities are defined with respect to multi-expert’s aggregated decision and process subjective evaluation of the expert’s decisions. rough numbers consisting of upper, lower and boundary interval, respectively, determine intervals of their evaluations without requiring additional information by relying only on original data (zhai et al., 2008). hence, the obtained expert decision-makers (dms) perceptions objectively present and improve their decision-making process. according to zhai et al. (2010), the definition of rough number is shown below. let’s u be a universe containing all objects and x be a random object from u . then we assume that there exists a set built with k classes representing dms preferences, 1 2( , ,..., )kr j j j with condition 1 2 ,..., kj j j   . 
then, ∀x ∈ u, ∀jq ∈ r, 1 ≤ q ≤ k, the lower approximation $\underline{apr}(j_q)$, the upper approximation $\overline{apr}(j_q)$ and the boundary interval $bnd(j_q)$ are determined, respectively, as follows:

$\underline{apr}(j_q) = \bigcup \{ x \in u \mid r(x) \le j_q \}$  (1)

$\overline{apr}(j_q) = \bigcup \{ x \in u \mid r(x) \ge j_q \}$  (2)

$bnd(j_q) = \bigcup \{ x \in u \mid r(x) \ne j_q \} = \{ x \in u \mid r(x) > j_q \} \cup \{ x \in u \mid r(x) < j_q \}$  (3)

the object can be presented with a rough number (rn) defined with its lower limit $\underline{lim}(j_q)$ and upper limit $\overline{lim}(j_q)$, respectively:

$\underline{lim}(j_q) = \frac{1}{m_L} \sum r(x), \quad x \in \underline{apr}(j_q)$  (4)

$\overline{lim}(j_q) = \frac{1}{m_U} \sum r(x), \quad x \in \overline{apr}(j_q)$  (5)

where $m_L$ and $m_U$ represent the number of objects contained in the lower and upper approximation of $j_q$, respectively. for object $j_q$, the rough boundary interval $irbnd(j_q)$ presents the interval between the lower and the upper limit:

$irbnd(j_q) = \overline{lim}(j_q) - \underline{lim}(j_q)$  (6)

the rough boundary interval presents a measure of uncertainty. a bigger $irbnd(j_q)$ value shows that variations exist in the experts' preferences, while smaller values show that the experts have harmonized opinions without major deviations. in $irbnd(j_q)$ are comprised all the objects between the lower limit $\underline{lim}(j_q)$ and the upper limit $\overline{lim}(j_q)$ of the rough number $rn(j_q)$. that means that $rn(j_q)$ can be presented using $\underline{lim}(j_q)$ and $\overline{lim}(j_q)$:

$rn(j_q) = \left[ \underline{lim}(j_q),\ \overline{lim}(j_q) \right]$  (7)

since rough numbers belong to the group of interval numbers, arithmetic operations applied to interval numbers are also appropriate for rough numbers (zhu et al., 2015).

3. rough based best-worst method (r-bwm)

in order to take into account the subjectivity that appears in group decision-making more comprehensively, in this study a modification of the best-worst method (bwm) is carried out using rough numbers (rn). the application of rn eliminates the necessity for additional information when determining uncertain intervals of numbers. in this way, the quality of the existing data is retained in group decision-making and the perception of the experts is expressed in an objective way in the aggregated best-to-others (bo) and others-to-worst (ow) matrices. since the method is very recent, the literature so far only has the traditional (crisp) bwm (rezaei, 2015; rezaei et al., 2015; rezaei, 2016; ren et al., 2017) and a modification of the bwm carried out using fuzzy numbers (guo and zhao, 2017). also, stević et al. (2017b) used the rough bwm to solve an internal transportation problem of a paper manufacturing company. the approach in this section introduces rn, which enables a more objective expert evaluation of criteria in a subjective environment. the proposed modification of the bwm using rn (r-bwm) makes it possible to take into account the doubts that occur during the expert evaluation of criteria. r-bwm makes it possible to bridge the existing gap in the bwm methodology with the application of a novel approach in the treatment of uncertainty based on rn. the following section presents the algorithm for the r-bwm, which includes the following steps:

step 1 determining a set of evaluation criteria. this starts from the assumption that the process of decision-making involves m experts. in this step, the experts consider a set of evaluation criteria and select the final one, c = {c1, c2, …, cn}, where n represents the total number of criteria.
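before the pairwise-comparison steps, a minimal sketch of how equations (1)-(7), together with the averaging used later in equations (11) and (14), turn a set of crisp expert grades into a rough number; function names are illustrative. with the four expert grades 2, 2, 3, 3 (one of the rows that appears later in table 2), the aggregated rough number is [2.25, 2.75], the value reported in table 3.

```python
def rough_number(grades, jq):
    """rn(j_q), equations (1)-(7): means of the grades <= j_q and >= j_q."""
    lower = [g for g in grades if g <= jq]   # lower approximation, eq. (1)
    upper = [g for g in grades if g >= jq]   # upper approximation, eq. (2)
    return sum(lower) / len(lower), sum(upper) / len(upper)   # eqs. (4), (5), (7)

def aggregate(grades):
    """average the rough sequence over all expert grades (the step in eqs. (11)/(14))."""
    pairs = [rough_number(grades, g) for g in grades]
    lo = sum(p[0] for p in pairs) / len(pairs)
    up = sum(p[1] for p in pairs) / len(pairs)
    return round(lo, 2), round(up, 2)

print(aggregate([2, 2, 3, 3]))   # (2.25, 2.75) - matches c2 in tables 2 and 3
print(aggregate([8, 8, 9, 9]))   # (8.25, 8.75) - matches c5 in the best-to-others row
```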
step 2 determining the most significant (most influential) and worst (least significant) criteria. the experts decide on the best and the worst criteria from the set of criteria  1 2, ,... nc c c c . if the experts decide on two or more criteria as the best, or worst, the best and worst criteria are selected arbitrarily. step 3 determining the preferences of the most significant (most influential) criteria (b) from set c over the remaining criteria from the defined set. under the assumption that there are m experts and n criteria under consideration, each expert should determine the degree of influence of best criterion b on criteria j ( 1, 2,...,j n ). this is how we obtain a comparison between the best criterion and the others. the preference of criterion b compared to the j-th criterion defined by the e-th expert is denoted with e bja ( 1, 2,...,j n ;1 e m  ). the value of each pair e bja takes a value from the predefined scale in interval  1, 9 e bja  . as a result a bestto-others (bo) vector is obtained: 1 2( , ,..., ); 1 e e e e b b b bna a a a e m   (8) where e bja represents the influence (preference) of best criterion b over criterion j, whereby 1 e bba  . this is how we obtain bo matrices 1 ba , 2 ba , …, m ba for each expert. step 4 determining the preferences of the criteria from set c over the worst criterion (w) from the defined set. each expert should determine the degree of influence of criterion j ( 1, 2,...,j n ) in relation to criterion w. the preference of criterion j in relation to criterion w defined by the e-th expert is denoted as e jwa ( 1, 2,...,j n ;1 e m  ). the value of each pair e jwa takes a value from the predefined scale in interval  1, 9ejwa  . as a result an others-to-worst (ow) vector is obtained: 1 2( , ,..., ); 1 e e e e w w w nwa a a a e m   (9) where e jwa represents the influence (preference) of criterion j in relation to criterion w, whereby 1 e wwa  . this is how we obtain ow matrices 1 wa , 2 wa , …, m wa for each expert. step 5 determining the rough bo matrix for the average answers of the experts. based on the bo matrices of the experts’ answers 1 e e b bj n a a      , we form matrices of the aggregated sequences of experts *e ba * 2 1 2 1 2 1 1 1 2 2 2 1 , , , ; , , ,; ; ; ; , e k m b b b b b b b bn bn bn n m m a a a a a aa a a a          (10) supplier selection using rough bwm-mairca model: a case study in pharmaceutical… 21 where  1 2, , ,e mbj bj bj bna a a a  represents sequences by means of which the relative significance of criterion b is described in relation to criterion j. using equations (1)-(7) each sequence e bja is transformed into rough sequence   ( ), ( )e e ebj bj bjr a lim an lim a    , where ( ) e bjlim a represent the lower limits, and ( ) e bjlim a the upper limit of rough sequence  ebjrn a , respectively. so for sequence  ebjrn a we obtain a bo matrix *1 ba , *2 ba , …, *m ba . by applying equation (11), we obtain the average rough sequence of the bo matrix 11 2 1 1 ( ) ( , ,..., ) 1 m l el bj bj ee bj bj bj bj m u eu bj bj e a a m rn a rn a a a a a m               (11) where e represents the e-th expert ( 1, 2,...,e m ), while  ebjrn a represents the rough sequences. we thus obtain the averaged rough bo matrix of average responses ba 1 2 1 , ,...,b b b bn n a a a a      (12) step 6 determining the rough ow matrix of average expert responses. 
based on the wo matrices of the expert responses 1 e e w jw n a a      , as with the rough bo matrices, for each element e jwa we form matrices of the aggregated sequences of the experts *e wa * 1 2 1 2 1 2 1 1 1 2 2 2 1 , , , ; , , ,; ; ; ; , e m m w w w w w w w nw nw nw m n a a a a aa a a a a          (13) where  1 2, , ,ejw jw jw n m wa a a a  represents sequence with which the relative significance of criterion j is described in relation to criterion w. as in step 5, using (1)-(7), sequences e jwa are transformed into rough sequences   ( ), ( )e e ejw jw jwr a lim an lim a    . thus for each rough sequence of expert e (1 e m  ) a rough bo matrix is formed. equation (14) is used to average the rough sequences of the ow matrix of the experts to obtain an averaged rough ow matrix. 11 2 1 1 ( ) ( , ,..., ) 1 m l el jw jw ee jw jw jw jw m u eu jw jw e a a m rn a rn a a a a a m               (14) badi & ballem/decis. mak. appl. manag. eng. 1 (2) (2018) 16-33 22 where e represents the e-th expert ( 1, 2,...,e m ), while ( )jwrn a represents the rough sequences. thus, we obtain the averaged rough ow matrix of average responses wa 1 2 1 , ,...,w w w nw n a a a a      (15) step 7 calculation of the optimal rough values of the weight coefficients of criteria 1 2[ ( ), ( ),..., ( )]nrn w rn w rn w from set c . the goal is to determine the optimal value of the evaluation criteria, which should satisfy the condition that the difference in the maximum absolute values (16) ( )( ) ( ) ( ) ( ) ( ) jb bj jw j w rn wrn w rn a and rn w rn w rn w   (16) for each value of j is minimized. in order to meet these conditions, the solution that satisfies the maximum differences according to the absolute value ( ) ( ) ( ) b bj j rn w rn a rn w  and ( ) ( ) ( ) j jw w rn w rn w rn w  should be minimized for all values of j. for all values of the interval rough weight coefficients of the criteria ( ) ( ), ( ) [ , ] l u j j j j jrn w lim w lim w w w    the condition is met that 0 1 l u j jw w   for each evaluation criterion jc c . weight coefficient jw belongs to interval [ , ] l u j jw w , that is l u j jw w for each value 1, 2,...,j n . on this basis we can conclude that in the case of the rough of the weight coefficients of the criteria the condition is met that 1 1 n l jj w   and 1 1 n u jj w   . in this way the condition is met that the weight coefficients are found at interval [0,1], ( 1, 2,..., )jw j n  and that 1 1 n jj w   . the previously defined limits will be presented in the following min-max model: 1 1 ( )( ) min max ( ) , ( ) ( ) ( ) . . 1 1; , 1, 2,..., , 0, 1, 2,..., jb bj jw j j w n l jj n u jj l u j j l u j j rn wrn w rn a rn w rn w rn w s t w w w w j n w w j n                             (17) where ( ) ( ), ( ) [ , ] l u j j j j jrn w lim w lim w w w    is the rough weight coefficient of a criterion. model (17) is equivalent to the following model: supplier selection using rough bwm-mairca model: a case study in pharmaceutical… 23 1 1 min . . 
; ; ; ; 1; 1; , 1, 2,..., , 0, 1, 2,..., l u u l b b bj bj u l j j l u u lj j jw jw u l w w n l jj n u jj l u j j l u j j s t w w a a w w w w a a w w w w w w j n w w j n                                       (18) where ( ) [ , ] l u j j jrn w w w represents the optimum values of the weight coefficients, ( ) [ , ] l u b b brn w w w and ( ) [ , ] l u w w wrn w w w represents the weight coefficients of the best and worst criterion, respectively, while ( ) , l u jw j jrn a a a      and ( ) , l u bj bj bjrn a a a      , respectively, represent the values from the average rough ow and rough bo matrices (see equations (12) and (15)). by solving model (18) we obtain the optimal values of the weight coefficients of evaluation criteria 1 2[ ( ), ( ),..., ( )]nrn w rn w rn w and *  . the consistency ratio of the rough bwm the consistency ratio is a very important indicator by means of which we check the consistency of the pair wise comparison of the criteria in the rough bo and rough ow matrices. definition 1 comparison of the criteria is consistent when condition ( ) ( ) ( )bj jw bwrn a rn a rn a  is fulfilled for all criteria j, where ( )bjrn a , ( )jwrn a and ( )bwrn a , respectively, represent the preference of the best criterion over criterion j, the preference of criterion j over the worst criterion, and the preference of the best criterion over the worst criterion. however, when comparing the criteria it can happen that some pairs of criteria j are not completely consistent. therefore, the next section defines consistency ratio (cr), which gives us information on the consistency of the comparison between the rough bo and the rough ow matrices. in order to show how cr is determined we start from calculation of the minimum consistency when comparing the criteria, which is explained in the following section. as previously indicated, the pair wise comparison of the criteria is carried out based on a predefined scale in which the highest value is 9 or any other maximum from a scale defined by the decision-maker. the consistency of the comparison badi & ballem/decis. mak. appl. manag. eng. 1 (2) (2018) 16-33 24 decreases when ( ) ( )bj jwrn a rn a is less or greater than ( )bwrn a , that is when ( ) ( ) ( )bj jw bwrn a rn a rn a  . it is clear that the greatest inequality occurs when ( )bjrn a and ( )jwrn a have the maximum values that are equal to ( )bwrn a , which continues to affect the value of  . based on these relationships we can conclude that ( ) ( ) ( ) ( ) ( ) ( )b j j w b wrn w rn w rn w rn w rn w rn w        (19) as the largest inequality occurs when ( )bjrn a and ( )jwrn a have their maximum values, then we need to subtract the value  from ( )bjrn a and ( )jwrn a and add ( )bwrn a . thus we obtain equation (20)  ( ) ( ) ( )bj jw bwrn a rn a rn a             (20) since for the minimum consistency ( ) ( ) ( )bj jw bwrn a rn a rn a  applies, we present equation (20) as        2 2 ( ) ( ) ( ) 1 2 ( ) ( ) ( ) 0 bw bw bw bw bw bw rn a rn a rn a rn a rn a rn a                    (21) since we are using rough numbers, and if there is no consensus between the dm on their preferences of the best criterion over the worst criterion, then ( )bwrn a will not have a crisp value but we will use ( ) , l u bw bw bwrn a a a      . 
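a small numerical check of the consistency index: assuming equation (22) is the standard bwm consistency condition ξ² − (1 + 2·a_bw^u)·ξ + ((a_bw^u)² − a_bw^u) = 0 (as in rezaei, 2015), its smaller root gives the ci values listed in table 1 below, and equation (23) then gives cr = ξ*/ci. function and variable names are illustrative.

```python
import math

def consistency_index(a_bw):
    # smaller root of xi^2 - (1 + 2*a_bw)*xi + (a_bw**2 - a_bw) = 0  (equation (22))
    b = 1 + 2 * a_bw
    c = a_bw ** 2 - a_bw
    return (b - math.sqrt(b * b - 4 * c)) / 2

# a_bw = 1..9 gives 0.00, 0.44, 1.00, 1.63, 2.30, 3.00, 3.73, 4.47, 5.23 (table 1)
print([round(consistency_index(a), 2) for a in range(1, 10)])

# with the aggregated upper limit a_bw^u = 8.75 from the case study, ci comes out
# at about 5.04; equation (23) then gives cr = 0.8464 / 5.04, i.e. roughly 0.17
# here versus the cr = 0.16 reported in the text (a rounding difference)
ci = consistency_index(8.75)
print(round(ci, 2), round(0.8464 / ci, 2))
```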
since for rn condition l u bw bwa a applies, we can conclude that the preference of the best criterion over the worst cannot be greater than u bwa . in this case, when we use upper limit u bwa for determining the value of ci, then all the values connected with ( )bwrn a can use the ci obtained for calculating the value of cr. we can conclude this from the fact that the consistency index which corresponds to u bwa has the highest value in interval , l u bw bwa a      . based on this conclusion we can transform equation (21) in the following way:     22 1 2 0 u u u bw bw bwa a a      (22) by solving equation (22) for the different values of u bwa we can determine the maximum possible values of  ,which is the ci for the r-bw method. since we obtain the values of ( )bwrn a , i.e. u bwa on the basis of the aggregated decisions of the dm, and these change the ivfrn interval, it is not possible to predefine the values of  . the values of  depend on uncertainties in the decisions, since uncertainties change the rn interval. as explained in the algorithm for the r-bw method, interval , l u bw bwa a   changes depending on uncertainties in evaluating the criteria. supplier selection using rough bwm-mairca model: a case study in pharmaceutical… 25 if the dm agree on their preference for the best criterion over the worst then bwa represents the crisp value of bwa from the defined scale and then the maximum values of  apply for different values of  1, 2,...,9bwa  , table 1. table1 values of the consistency index (ci) bwa 1 2 3 4 5 6 7 8 9 ci ( max ) 0.00 0.44 1.00 1.63 2.30 3.00 3.73 4.47 5.23 in table 1 values bwa are taken from the scale  1, 2,...,9 which is defined in rezaei (2015). on the basis of ci (table 1) we obtain consistency ratio (cr) * cr ci   (23) cr takes values from interval  0,1 , where the values closer to zero show high consistency while the values of cr closer to one show low consistency. 4. rough mairca method the basic assumption of the mairca method is to determine the gap between ideal and empirical weights. the summation of the gaps for each criterion gives the total gap for every observed alternative. finally, alternatives will be ranked, and the best ranked alternative is the one with the smallest value of the total gap. the mairca method shall be carried out in 6 steps (pamučar et al., 2014; gigović et al., 2016): step 1 formation of the initial decision matrix ( y ). the first step includes evaluation of l alternatives per n criteria. based on response matrices yk=[ykij]l×nby all m experts we obtain matrix * y of aggregated sequences of experts 1 2 1 2 1 2 11 11 11 12 12 12 1 1 1 1 2 1 2 1 2 * 21 21 21 22 22 22 2 2 2 1 2 1 2 2 1 1 1 1 2 2 2 , , , ; , , , , , , , ; , , , , , , , ; , , , ; ; ; ; ; ; ; ; ; , n n n n n n n n n n n n nn nn n m m m m m m m m n m y y y y y y y y y y y y y y y y y y y y y y y y y y y y                           (24) where  1 2, , ,ij ij m ij ijy y y y  denote sequences for describing relative importance of criterion i in relation to alternative j. by applying equations (1) through (7), sequences m ijy are transformed into rough sequences  ij m rn y . consequently, rough matrices y1l, y2l, …,yml will be obtained for rough sequence  ij m rn y , where m denotes the number of experts. 
therefore, for the group of rough matrices y1, y2, …,ym we obtain rough sequences    1 1 2 2( ), ( ) , ( ), ( ) ,..., ( ), ( )ij ij ij ij jmj ij imir llim y lim y lim im y ln y y y im yl im           . by applying equation (25), we obtain mean rough sequences badi & ballem/decis. mak. appl. manag. eng. 1 (2) (2018) 16-33 26 11 2 1 1 ( ) ( , ,..., ) 1 m l el ij ij ee ij ij ij ij m u eu ij ij e y y m rn y rn y y y y y m               (25) where e denotes e-th expert ( 1, 2,...,e m ), ( )ijrn y denotes rough number ( ) ( ), ( )ijij ijr y yn y lim lim    . in such a way, rough vectors       1 2, ,...,i i i ina rn y rn y rn y of mean initial decision matrix is obtained, where ( ) ( ), ( ) , l u ij ijij ijijyrn y lim lim yy y        denotes value of i -th alternative as per j -th criterion ( 1, 2,..., ;i l 1, 2,...,j n ). 1 2 1 11 12 1 2 21 22 2 1 2 ... ( ) ( ) ... ( ) ( ) ( ) ( ) ... ... ... ... ... ( ) ( ) ... ( ) n n n l l l ln l n c c c a rn y rn y rn y a rn y rn y rn y y a rn y rn y rn y              (26) where l denotes the number of alternatives, and n denotes total sum of criteria. step 2define preferences according to selection of alternatives ia p . when selecting alternative, the decision maker (dm) is neutral, i.e. does not have preferences to any of the proposed alternatives. since any alternative can be chosen with equal probability, preference per selection of one of l possible alternatives is as follows: 1 1 ; 1, 1, 2,..., i i l a a i p p i l l     (27) where l denotes the number of alternatives. step 3 calculate theoretical evaluation matrix elements ( pt ). theoretical evaluation matrix ( pt ) is developed in l x n format (l denotes the number of alternatives, n denotes the number of criteria). theoretical evaluation matrix elements ( ( )pijrn t ) are calculated as the multiplication of the preferences according to alternatives ia p and criteria weights ( ( ), 1, 2,...,irn w i n ) obtained by application of r-bwm. 1 2 1 2 11 12 1 21 22 2 1 2 ( ) ( ) ... ( ) ( ) ( ) ... ( ) ( ) ( ) ( ) ... ... ... ...... ( ) ( ) ... ( ) l n a p p p n a p p p n p pl pl plna l n rn w rn w rn w p rn t rn t rn t p rn t rn t rn t t rn t rn t rn tp               (28) supplier selection using rough bwm-mairca model: a case study in pharmaceutical… 27 where ia p denotes preferences per selection of alternatives, ( )irn w weight coefficients of evaluation criteria, and ( )pijrn t theoretical assessment of alternative for the analyzed evaluation criterion. elements constituting matrix tp will be then defined by applying equation (29) ( ) , l u pij ai i ai i it p rn w p w w      (29) since dm is neutral to the initial selection of alternatives, all preferences ( ia p ) are equal for all alternatives. since preferences ( ia p ) are equal for all alternatives, then matrix (28) will have 1 x n format ( n denotes the number of criteria). 1 2 1 1 2 2 1 ( ) ( ) ... ( ) , , , ... , i n l u l u l u p a p p p p pn pn xn rn w rn w rn w t p t t t t t t              (30) where n denotes the number of criteria, ia p preferences according to selection of alternatives,  irn w weight coefficients of evaluation criteria. step 4 determination of real evaluation ( rt ). 
calculation of the real evaluation matrix elements (tr) is done by multiplying the theoretical evaluation matrix elements (tp) with the normalized elements of the initial decision-making matrix, according to the following equation:

$rn(t_{rij}) = rn(t_{pij}) \cdot rn(\hat{y}_{ij}) = \left[ t_{pij}^{L} \cdot \hat{y}_{ij}^{L},\ t_{pij}^{U} \cdot \hat{y}_{ij}^{U} \right]$  (31)

where $rn(t_{pij})$ denotes the elements of the theoretical evaluation matrix and $rn(\hat{y}_{ij})$ denotes the elements of the normalized matrix $\hat{y} = [rn(\hat{y}_{ij})]_{l \times n}$. normalization of the mean initial decision matrix (25) is done by applying equations (32) and (33):

a) for the „benefit“ type criteria (a higher criterion value is preferable)

$rn(\hat{y}_{ij}) = \left[ \dfrac{y_{ij}^{L} - y_i^{-}}{y_i^{+} - y_i^{-}},\ \dfrac{y_{ij}^{U} - y_i^{-}}{y_i^{+} - y_i^{-}} \right]$  (32)

b) for the „cost“ type criteria (a lower criterion value is preferable)

$rn(\hat{y}_{ij}) = \left[ \dfrac{y_{ij}^{U} - y_i^{+}}{y_i^{-} - y_i^{+}},\ \dfrac{y_{ij}^{L} - y_i^{+}}{y_i^{-} - y_i^{+}} \right]$  (33)

where $y_i^{-}$ and $y_i^{+}$ denote the minimum and maximum values of the marked criterion over its alternatives, respectively:

$y_i^{-} = \min_j \left( y_{ij}^{L} \right)$  (34)

$y_i^{+} = \max_j \left( y_{ij}^{U} \right)$  (35)

step 5 calculation of the total gap matrix (g). the elements of the g matrix are obtained as the difference (gap) between the theoretical ($t_{pij}$) and real ($t_{rij}$) evaluations, i.e. by subtracting the elements of the real evaluation matrix (tr) from the elements of the theoretical evaluation matrix (tp):

$g = t_p - t_r = \begin{bmatrix} rn(g_{11}) & rn(g_{12}) & \dots & rn(g_{1n}) \\ rn(g_{21}) & rn(g_{22}) & \dots & rn(g_{2n}) \\ \dots & \dots & \dots & \dots \\ rn(g_{l1}) & rn(g_{l2}) & \dots & rn(g_{ln}) \end{bmatrix}_{l \times n}$  (36)

where n denotes the number of criteria, l denotes the number of alternatives, and $g_{ij}$ represents the obtained gap of alternative i per criterion j. the gap $g_{ij}$ takes values from the interval rough number according to equation (37):

$rn(g_{ij}) = rn(t_{pij}) - rn(t_{rij}) = \left[ t_{pij}^{L} - t_{rij}^{U},\ t_{pij}^{U} - t_{rij}^{L} \right]$  (37)

it is preferable that the $rn(g_{ij})$ value goes to zero ($rn(g_{ij}) \to 0$), since the alternative with the smallest difference between the theoretical ($rn(t_{pij})$) and real ($rn(t_{rij})$) evaluation is chosen. if alternative $a_i$ for criterion $c_i$ has a theoretical evaluation value equal to the real evaluation value ($rn(t_{pij}) = rn(t_{rij})$), then the gap of alternative $a_i$ for criterion $c_i$ is zero, i.e. alternative $a_i$ is, per criterion $c_i$, the best (ideal) alternative. if alternative $a_i$ for criterion $c_i$ has a theoretical evaluation value $rn(t_{pij})$ and the real evaluation value is zero, then the gap of alternative $a_i$ for criterion $c_i$ is $rn(g_{ij}) = rn(t_{pij})$. this means that alternative $a_i$ is, for criterion $c_i$, the worst (anti-ideal) alternative.

step 6 calculation of the final values of the criteria functions ($q_i$) per alternative. the values of the criteria functions are obtained by summing the gaps from matrix (36) for each alternative over the evaluation criteria, i.e. by summing the elements of matrix (g) per rows, as shown in equation (38):

$rn(q_i) = \sum_{j=1}^{n} rn(g_{ij}), \quad i = 1, 2, \dots, m$  (38)

where n denotes the number of criteria and m denotes the number of the chosen alternatives. ranking of the alternatives can be done by applying the rules governing the ranking of rough numbers described in (stević et al., 2017).

5. calculation part

application of the hybrid rough bwm-mairca model is shown using a case study related to the selection of an optimal supplier in libya.
based on an analysis of the available literature and an expert evaluation of suppliers, five criteria were used: price and costs (c1), quality (c2), supplier profile (c3), delivery (c4) and flexibility (c5). four experts took part in the research. the r-bwm was used to determine the weight coefficients of the criteria. after defining the criteria for evaluation, the experts also determined the best (b) and worst (w) criteria. on this basis, the experts determined the bo and ow matrices in which the preferences of the best criterion (b) over the remaining criteria, and of the remaining criteria over the worst criterion (w), were considered. the evaluation of the criteria was carried out using the scale a_ij^e ∈ [1, 9] [18]. the bo and ow matrices are presented in table 2.

table 2. the bo and ow expert evaluation matrices

best: c1   expert evaluation        worst: c5   expert evaluation
c1         1, 1, 1, 1               c1          8, 7, 8, 7
c2         2, 2, 3, 3               c2          4, 4, 3, 4
c3         2, 3, 3, 2               c3          4, 4, 5, 5
c4         4, 5, 5, 4               c4          2, 3, 2, 3
c5         8, 8, 9, 9               c5          1, 1, 1, 1

using equations (1)-(7), the evaluations in the bo and ow matrices were transformed into rough numbers. after transforming the crisp numbers into rough numbers, equations (9)-(15) were used to transform the expert bo and ow matrices into the aggregated rough bo and rough ow matrices, table 3.

table 3. aggregated rough bo and rough ow matrices

best: c1   rn               worst: c5   rn
c1         [1.00, 1.00]     c1          [7.25, 7.75]
c2         [2.25, 2.75]     c2          [3.56, 3.94]
c3         [2.25, 2.75]     c3          [4.25, 4.75]
c4         [4.25, 4.75]     c4          [2.25, 2.75]
c5         [8.25, 8.75]     c5          [1.00, 1.00]

on the basis of the rough bo and rough ow matrices, the optimal values of the rough weight coefficients of the criteria were calculated using model (18), table 4.

table 4. optimal values of the criteria weights

criterion   weights             rank
c1          [0.4113, 0.4286]    1
c2          [0.2035, 0.2169]    2
c3          [0.1498, 0.1576]    3
c4          [0.1062, 0.1424]    4
c5          [0.0667, 0.0748]    5

by solving model (18) the value ξ* = 0.8464 is obtained. the value of ξ* is used to determine the consistency ratio (cr = 0.16), equation (23). since the value of a_bw, i.e. a_bw^u, is obtained on the basis of the aggregated decisions of the experts, and these affect the interval of the rn, it is not possible to predefine the values of the consistency index ξ. using equation (22), the value of the consistency index is defined (ci = 5.04).

after calculating the weight coefficients of the criteria, the expert evaluation of the alternatives was carried out with respect to the predefined evaluation criteria. once the evaluation process was completed, by applying equations (24) through (26) the decisions were aggregated and the initial decision-making matrix y* was obtained, table 5. the evaluation of the alternatives was carried out using the scale y_ij^e ∈ [1, 5].

table 5. aggregated initial decision-making matrix

criteria/alternatives   c1              c2              c3              c4              c5
a1                      [2.05, 2.39]    [2.06, 2.43]    [2.23, 2.73]    [2.25, 3.20]    [1.98, 2.86]
a2                      [2.43, 3.44]    [4.58, 4.95]    [2.10, 2.77]    [4.55, 4.93]    [4.00, 4.00]
a3                      [4.26, 4.76]    [4.55, 4.93]    [4.54, 4.93]    [4.46, 5.00]    [4.46, 5.00]

after the aggregation of the evaluated criteria (table 5), the preferences for the selection of alternatives were determined as pai = 1/m = 0.33, where m denotes the number of alternatives, so pa1 = pa2 = pa3 = 0.33. based on the preferences pai, and by applying equation (29), the theoretical evaluation matrix (tp) of rank 1 × n is obtained.
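a compact numerical sketch of steps 2-6 applied to the case-study data (weights from table 4, aggregated decision matrix from table 5). it assumes c1 (price and costs) is the only cost-type criterion, the remaining criteria are benefit-type, and pai = 1/3; names are illustrative. up to rounding, it reproduces the real evaluations in table 6 and the total gaps and ranking in table 7 (a2, a3, a1).

```python
# rough mairca, steps 2-6, on the case-study data; each value is a (lower, upper) pair.
weights = [(0.4113, 0.4286), (0.2035, 0.2169), (0.1498, 0.1576),
           (0.1062, 0.1424), (0.0667, 0.0748)]          # table 4 (r-bwm weights)
y = {                                                    # table 5, aggregated decision matrix
    "a1": [(2.05, 2.39), (2.06, 2.43), (2.23, 2.73), (2.25, 3.20), (1.98, 2.86)],
    "a2": [(2.43, 3.44), (4.58, 4.95), (2.10, 2.77), (4.55, 4.93), (4.00, 4.00)],
    "a3": [(4.26, 4.76), (4.55, 4.93), (4.54, 4.93), (4.46, 5.00), (4.46, 5.00)],
}
cost = {0}               # assumption: only c1 (price and costs) is a cost-type criterion
p_ai = 1 / len(y)        # equation (27): neutral preference for each alternative

t_p = [(p_ai * wl, p_ai * wu) for wl, wu in weights]     # theoretical evaluations, eq. (29)

totals = {}
for alt, row in y.items():
    gap_l = gap_u = 0.0
    for j, (lo, up) in enumerate(row):
        y_min = min(y[a][j][0] for a in y)               # eq. (34)
        y_max = max(y[a][j][1] for a in y)               # eq. (35)
        if j in cost:                                    # eq. (33), cost type
            n = ((up - y_max) / (y_min - y_max), (lo - y_max) / (y_min - y_max))
        else:                                            # eq. (32), benefit type
            n = ((lo - y_min) / (y_max - y_min), (up - y_min) / (y_max - y_min))
        t_r = (t_p[j][0] * n[0], t_p[j][1] * n[1])       # real evaluation, eq. (31)
        gap_l += t_p[j][0] - t_r[1]                      # gap, eq. (37)
        gap_u += t_p[j][1] - t_r[0]
    totals[alt] = (gap_l, gap_u)                         # total gap rn(q_i), eq. (38)

for alt, (ql, qu) in sorted(totals.items(), key=lambda kv: kv[1][0] + kv[1][1]):
    print(alt, round(ql, 2), round(qu, 2))   # a2 [0.04, 0.17], a3 [0.09, 0.19], a1 [0.13, 0.22]
```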
in order to determine the real evaluation matrix tr (table 6), the elements of the theoretical evaluation matrix are multiplied with the normalized elements of the aggregated initial decision matrix.

table 6. real evaluation matrix tr

criteria/alternatives   c1              c2              c3              c4              c5
a1                      [0.12, 0.14]    [0.00, 0.01]    [0.00, 0.01]    [0.00, 0.02]    [0.00, 0.01]
a2                      [0.07, 0.12]    [0.06, 0.07]    [0.00, 0.01]    [0.03, 0.05]    [0.01, 0.02]
a3                      [0.00, 0.03]    [0.06, 0.07]    [0.04, 0.05]    [0.03, 0.05]    [0.02, 0.02]

normalization of the aggregated initial decision-making matrix is done by applying equations (32) and (33). in the next step, the elements of the real evaluation matrix (tr) are subtracted from the elements of the theoretical evaluation matrix (tp) to obtain the total gap matrix (g). by summing up the rows of the total gap matrix we obtain the total gap for every alternative, equation (38). based on the obtained values of the total gap between the theoretical and real evaluations, the final evaluation and ranking of the alternatives is performed, table 7.

table 7. values of the total gap of the alternatives and their ranking

alternatives   alternative gap rn(qi)   rank
a1             [0.13, 0.22]             3
a2             [0.04, 0.17]             1
a3             [0.09, 0.19]             2

6. conclusion

supplier selection is a very important step in the purchasing process; therefore, to carry out the selection process, it is first important to identify the criteria for selection. this is particularly important for a company operating in the pharmaceutical industry and working mainly with international suppliers. the study addresses the problem of medicine supply from international suppliers for both the public and private sectors in libya. five criteria and three suppliers are identified for supplier selection in this problem. this multiple-criteria decision-making problem is solved using the hybrid rough bwm-mairca model. as a result of the presented calculations, it is shown that cost comes first, followed by quality as the second and company profile as the third relevant criterion.

references

abdulshahed, a. m., badi, i. a. & blaow, m. m. (2017). a grey-based decision-making approach to the supplier selection problem in a steelmaking company: a case study in libya. grey systems: theory and application, 7(3), 385-396.

badi, i. a., abdulshahed, a. m. & shetwan, a. (2018). a case study of supplier selection for a steelmaking company in libya by using the combinative distance-based assessment (codas) model. decision making: applications in management and engineering, 1(1), 1-12.

chai, j., liu, j. n. & ngai, e. w. (2013). application of decision-making techniques in supplier selection: a systematic review of literature. expert systems with applications, 40, 3872-3885.

de boer, l., labro, e. & morlacchi, p. (2001). a review of methods supporting supplier selection. european journal of purchasing & supply management, 7, 75-89.

ghodsypour, s. h. & o'brien, c. (1998). a decision support system for supplier selection using an integrated analytic hierarchy process and linear programming. international journal of production economics, 56, 199-212.

gigović, lj., pamučar, d., bajić, z. & milićević, m. (2016). the combination of expert judgment and gis-mairca analysis for the selection of sites for ammunition depots. sustainability, 8(4), article no. 372, 1-30.

govindan, k., rajendran, s., sarkis, j. & murugesan, p. (2015). multi criteria decision-making approaches for green supplier evaluation and selection: a literature review. journal of cleaner production, 98, 66-83.

guo, s.
& zhao, h. (2017). fuzzy best-worst multi-criteria decision-making method and its applications. knowledge-base d systems, 121, 23-31. kaur, m., hall, s., & attawell, k. (2001). medical supplies and equipment for primary health care: a practical resource for procurement and management. echo international health services. khalifa a.a., bukhatwa s.a. & elfakhri m.m. (2017). antibiotics consumption in the eastern region of libya 2012-2013, libyan international medical university journal, 2, 55-63. https://www.emeraldinsight.com/author/badi%2c+ibrahim+a https://www.emeraldinsight.com/author/blaow%2c+mohamed+mehemed https://www.emeraldinsight.com/author/badi%2c+ibrahim+a badi & ballem/decis. mak. appl. manag. eng. 1 (2) (2018) 16-33 32 liang, w. y., huang, c.-c., lin, y.-c., chang, t. h. & shih, m. h. (2013). the multiobjective label correcting algorithm for supply chain modeling. international journal of production economics, 142, 172-178. mardani, a., e. k. zavadskas, z. khalifah, a. jusoh & k. m. nor. (2016). multiple criteria decision-making techniques in transportation systems: a systematic review of the state of the art literature. transport, 31 (3), 359-385. alsageer, m. a. (2013), analysis of informative and persuasive content in pharmaceutical company brochures in libya. the libyan journal of pharmacy & clinical pharmacology, 2, 951-988. pamučar d., vasin, lj. & lukovac l. (2014). selection of railway level crossings for investing in security equipment using hybrid dematel-marica model, xvi international scientific-expert conference on railway, railcon 2014, 89-92. porras-alvarado, j. d., murphy, m. r., wu, h., han, z., zhang, z. & arellano, m. year. (2017). an analytical hierarchy process to improve project prioritization in the austin district. transportation research board 96th annual meeting. prakash, s., soni, g. & rathore, a. p. s. (2015). a grey based approach for assessment of risk associated with facility location in global supply chain. grey systems: theory and application, 5, 419-436. ren, j., liang, h. & chan, f.t.s. (2017). urban sewage sludge, sustainability, and transition for eco-city: multi-criteria sustainability assessment of technologies based on best-worst method. technological forecasting & social change, 116, 29-39. rezaei, j. (2015). best-worst multi-criteria decision-making method. omega, 53, 4957. rezaei, j. (2016). best-worst multi-criteria decision-making method: some properties and a linear model. omega, 64, 126–130. rezaei, j., wang, j., tavasszy, l. (2015). linking supplier development to supplier segmentation using best worst method. expert systems with applications, 42, 91529164. saaty, t. l. (1979). applications of analytical hierarchies. mathematics and computers in simulation, 21, 1-20. saaty, t. l. (1990). decision-making for leaders: the analytic hierarchy process for decisions in a complex world, rws publications. stević, ž., pamučar, d., kazimieras zavadskas, e., ćirović, g., & prentkovskis, o. (2017 b). the selection of wagons for the internal transport of a logistics company: a novel approach based on rough bwm and rough saw methods. symmetry, 9(11), 264. stević, ž., pamučar, d., vasiljević, m., stojić, g., & korica, s. (2017 a). novel integrated multi-criteria model for supplier selection: case study construction company. symmetry, 9(11), 279. zhai, l.y., khoo, l.p., &zhong, z.w. (2008). a rough set enhanced fuzzy approach to quality function deployment. international journal of advanced manufacturing technology, 37 (5–6), 613-624. 
supplier selection using rough bwm-mairca model: a case study in pharmaceutical… 33 zhai, l.y., khoo, l.p., & zhong, z.w. (2010). towards a qfd-based expert system: a novel extension to fuzzy qfd methodology using rough set theory. expert systems with applications, 37(12), 8888-8896. zhu, g.n., hu, j., qi, j., gu, c.c. & peng, j.h. (2015). an integrated ahp and vikor for design concept evaluation based on rough number, advanced engineering informatics, 29, 408–418. © 2018 by the authors. submitted for possible open access publication under the terms and conditions of the creative commons attribution (cc by) license (http://creativecommons.org/licenses/by/4.0/). decision making: applications in management and engineering vol. 4, issue 2, 2021, pp. 106-125. issn: 2560-6018 eissn: 2620-0104 doi: https://doi.org/10.31181/dmame210402106a * corresponding author. e-mail address: taliparsu@aksaray.edu.tr (t. arsu) investigation into the efficiencies of european football clubs with bi-objective multi-criteria data envelopment analysis talip arsu 1* 1 vocational school of social sciences, university of aksaray, turkey received: 15 march 2021; accepted: 24 may 2021; available online: 12 june 2021. original scientific paper abstract: a financially successful football club can achieve sporting achievements as well as become financially stable. in other words, the success of football clubs depends on both financial and sportive success. contrary to the studies in the literature that focus on financial and sportive success separately, the present study aimed to examine the 5-season activities of 10 football clubs in the big-five league, which are the top leagues of europe, by using financial and sports criteria. bi-objective multi criteria data envelopment analysis (bio-mcdea) was used for the efficiency analysis. in the study, the number of social media followers, the average number of viewers and total market value were used as input, and the uefa club score and total revenues were used as output. as a result, arsenal, paris saint-germain, and juventus were determined as efficient in the 2015-2016 season, paris saintgermain and liverpool in the 2016-2017 season, manchester united, paris saint-germain and chelsea in the 2016-2017 season, manchester united, real madrid, bayern munich and arsenal in the 2018-2019 season, manchester united, paris saint-germain and chelsea. the reasons why psg was the most successful club in the efficiency analysis (efficient in four out of five seasons) were examined. in addition, in the sensitivity analysis conducted to determine the effect of inputs and outputs on the model, it was concluded that efficiency was highly related to financial data. keywords: european football clubs, efficiency, multi-criteria data envelopment analysis, bi-objective multi-criteria data envelopment analysis mailto:taliparsu@aksaray.edu.tr investigation into the efficiencies of european football clubs with bi-objective multi-criteria data envelopment analysis 107 1. introduction football is the most popular sport in the world. although there are many factors that underlie this popularity, the simplicity of the rules and the low cost can be considered as the most important factors (galariotis et al., 2018). however, in professional football, which has undergone a great transformation since the early 1990s, footballer salaries have started to increase exponentially (dobson & goddard, 2011). 
the bosman ruling introduced by the european court of justice in 1995 had a significant impact on the future of european football. the bosman ruling was named after the belgian midfielder jean-marc bosman's lawsuit that was filed for blocking his transfer from belgium to france at the end of his contract. the bosman ruling included the liberalization of the immigration of professional athletes within the eu and the abolition of transfer fees after the expiry of contracts. in addition, restrictions on the number of eu players that clubs can have playing on the field were also considered illegal according to bosman ruling (marcén, 2019). after the bosman ruling was recognized by uefa in march 1996, the transfers of football players between teams began to be carried out at astronomical figures. in addition, the fact that broadcasting contracts yielded an unimaginable scale of revenue just a few years ago, the complete reconstruction of many football fields, and the immeasurable increase in the importance of commercial sponsorship and merchandising increased the importance of football's financial infrastructure (dobson & goddard, 2011). football clubs are no longer organizations that only provide emotional and symbolic satisfaction to their supporters and focus on sporting success without profit. instead, football clubs have become a complex system in which investors invest capital and expect financial returns (miragaia et al., 2019). this development in professional football has turned football from being not only a sport branch in europe but also an industry branch. the revenues and brand values of football clubs have become competitive with many industries and brands. spain (la liga), england (premier league), italy (serie a), germany (bundesliga) and france (ligue 1), which are called the “big five league”, constitute a large part of the world football industry. the big five league, which ha s gained great value in the last 20 years, increased its total value from eur 2.95 billion in 1998 to eur 26.8 billion in 2021 (transfermarkt.com, 2021). however, these financial values are not governed by all the clubs in the big five league, but only the top 10 clubs in europe in terms of both sporting success and financial standing. manchester united, real madrid, fc barcelona, bayern munich, manchester city, arsenal, paris saint-germain (psg), chelsea, liverpool and juventus have a value of eur 7.96 billion, which is almost one third of all other clubs total value of the big five league (transfermarkt.com, 2021). the growth of the football industry at this scale in as little as 20 years has brought along both control and financial difficulties. although football clubs have many financial resources, these resources are largely related to sporting success. in other words, football clubs must be continuously successful in order to avoid experiencing financial difficulties, which is not possible. as the financial difficulties experienced by football clubs are beginning to become continuous, uefa has brought some restrictions on clubs with a regulation called financial fair play (ffp). ffp, which entered into force in 2009 and is updated every three years, basically aims to improve the economic and financial capabilities of the clubs and increase both their transparency and reliability. 
at the same time, thanks to the ffp, which aims to bring more discipline and rationality to club football financing, the ratio of the net debts of clubs to their income decreased from 65% to 35% in a short period of time (uefa, 2021). in order to achieve this financial success, uefa has imposed many restrictions on clubs and introduced severe penalties, such as points deductions, transfer restrictions and bans from tournaments, for those who do not comply with these restrictions. football clubs, which are under pressure from uefa, are also trying to meet the sportive success expectations of their stakeholders. it does not seem realistic to evaluate these two processes independently from each other in football clubs, where financial success supports sportive success. some studies in the literature have carried out financial evaluations by only considering the financial data of clubs (pradhan et al., 2016; chelmis et al., 2019), some have focused only on sportive success (rossi et al., 2019; salabun et al., 2020) and others have tried to associate financial success with sportive success (sakınç et al., 2017; galariotis et al., 2018). however, the success of football clubs is only possible when financial and sportive success are realized together in this cycle. the aim of this study, which was designed with the motivation of realizing financial and sportive success together and of filling this gap in the literature, was to investigate the efficiency values over five seasons of 10 football clubs that are at the top in terms of both sport and finance in europe. when conducting the efficiency analysis, the bi-objective multiple criteria data envelopment analysis (bio-mcdea) method, recommended by ghasemi et al. (2014) to eliminate the low discrimination problem of classical data envelopment analysis (dea), was used. the paper begins with a detailed literature review in section 2. in section 3, firstly, the classical dea and multiple criteria data envelopment analysis (mcdea) methods that form the basis of the bio-mcdea model are introduced, and the bio-mcdea model is shown. in the data subsection at the end of section 3, how the criteria used in this study were determined, the source of the data used as criteria and the criteria values are shown. in section 4, the findings of the study are presented; in section 5, a sensitivity analysis is given to determine the contribution of each criterion to the model; and in section 6, the findings are discussed. in the last section, the conclusions, advantages and limitations of the study and managerial implications are given.
2. literature review
the popularity of football around the world and the huge budgets managed by football clubs have made the football industry the subject of many academic studies conducted to examine the sportive or financial performances of national and international leagues, clubs and even players. in many of these studies, mcdm methods have been used for performance evaluation. pradhan et al. (2016) investigated the financial performance of italian clubs using grey relational analysis (gra), galariotis et al. (2018) determined the business, financial and sports performance of clubs in the french league using the promethee ii method, sakınç et al. (2017) studied the financial and sporting performance of 22 european clubs using the topsis method, chelmis et al. 
(2019) investigated the financial, commercial and sporting performance of clubs in the greek league using promethee ii and salabun et al. (2020) determined the performance of football players using the characteristic objects method (comet) and topsis method. in addition to these methods used, the most used mcdm method is dea which was developed by charnes et al. (1978). in recent years dea has been used in many decision problems such as the effectiveness of agricultural practices (angulo-meza et al., 2019), financial performance assessment (anthony et al., 2019), hospital efficiency assessments (kohl et al., 2019), sustainability assessment of the water sector (lombardi et al., 2019), bank activities investigation into the efficiencies of european football clubs with bi-objective multi-criteria data envelopment analysis 109 assessments (kamarudin et al., 2019), sustainable supplier selection (rashidi & cullinane, 2019), efficiency assessment of railway enterprises (blagojević et al., 2020), assessment of medium and large-sized industries in the diversity sector (hassanpour, 2020). dea studies in the literature generally consist of efficiency analyses conducted for all teams in the league of a specific country. dea was used to determine the efficiencies of the teams in england’s premier league (pestana barros & leach, 2006; guzman & morrow, 2007; haas, 2003a; kern et al., 2012), germany’s bundesliga (haas et al., 2004), france’s ligue 1 (jardin, 2009), usa’s major league soccer (mls) (haas, 2003b), italian serie a (rossi et al., 2019), and brazil’s serie a (pestana barros et al., 2010). in addition, the efficiencies of european clubs (halkos & tzeremes, 2013; miragaia et al., 2019) and national teams participating in euro 2012 (rubem & brandao, 2015) were determined using dea. however, no study examining the 5season efficiency values of 10 top european clubs which make up half of the total value of the big five league were found in the literature. furthermore, classical dea was used in almost all efficiency studies in the literature. although classical dea is a widely used nonparametric efficiency instrument, it has the disadvantage of low discrimination power. in order to avoid this disadvantage, the bio-mcdea model, which was developed by ghasemi et al. (2014) and has been used in decision problems such as the equipment efficiency assessment for automotive industry (da silva et al., 2017), port efficiency assessment (de andrade et al., 2019), electrical distribution units efficiency assessment (ghofran et al., 2021), was used in this study. 3. material and methods bio-mcdea is a goal programming based efficiency determination model developed by ghasemi et al. (2014) in which the dea model aims to improve the discrimination power. bal et al. (2010) proposed the goal programming data envelopment analysis (gpdea) model which is based on goal programming that would eliminate the problem of discrimination power and weight distribution of the dea model. the gpdea model is based on solving unwanted deviations using equal weight. bio-mcdea was used in the present study to exclude classical dea and thus avoid the disadvantage of low discrimination power and because its solution steps are easier. 3.1. multiple criteria data envelopment analysis (mcdea) classical dea is a widely used non-parametric analysis for efficiency analysis, used especially in social sciences. the conversion of the classical dea method into a linear programming form proposed by charnes et al. (1978) is shown below. 
$\max h_0 = \sum_{r=1}^{s} u_r y_{rj_0}$
$\text{s.t.} \;\; \sum_{i=1}^{m} v_i x_{ij_0} = 1$ (1)
$\sum_{r=1}^{s} u_r y_{rj} - \sum_{i=1}^{m} v_i x_{ij} \le 0, \quad j = 1, \ldots, n$
$u_r \ge 0, \;\; v_i \ge 0$
where $j = 1, \ldots, n$ indexes the decision-making units (dmus), $r = 1, \ldots, s$ the outputs and $i = 1, \ldots, m$ the inputs; $y_{rj}$ is the value of the rth output for the jth dmu, $x_{ij}$ is the value of the ith input for the jth dmu, $u_r$ is the weight of the rth output, $v_i$ is the weight of the ith input and $h_0$ refers to the relative efficiency of the dmu under evaluation ($j_0$). in this model, a dmu is efficient only if $h_0 = 1$ (charnes et al., 1978; despic et al., 2019). although classical dea is an efficiency measurement method, li & reeves' (1999) mcdea model is based on inefficiency. $d_0$, which is limited to the [0, 1] range, can be considered a measure of "inefficiency" and is defined through $h_0 = 1 - d_0$. therefore, the smaller the $d_0$ value, the less inefficient (and therefore more efficient) the dmu is. in the method of li & reeves (1999), besides the minimization of $d_0$, which is the measure of inefficiency, there are two independent objective functions, namely, minimizing the maximum deviation and minimizing the sum of deviations. their model is as follows:
$\min d_0 \;\; (\text{or } \max h_0 = \sum_{r=1}^{s} u_r y_{rj_0})$
$\min M$
$\min \sum_{j=1}^{n} d_j$ (2)
$\text{s.t.} \;\; \sum_{i=1}^{m} v_i x_{ij_0} = 1$
$\sum_{r=1}^{s} u_r y_{rj} - \sum_{i=1}^{m} v_i x_{ij} + d_j = 0, \quad j = 1, \ldots, n$
$M - d_j \ge 0, \quad j = 1, \ldots, n$
$u_r, v_i, d_j \ge 0$
the mcdea model was proposed primarily as a tool for improving the discrimination power of the classical dea model. in the solution procedure, mcdea was proposed as an interactive approach for solving the three objectives. the first objective accommodates the classical dea solution within the set of mcdea solutions. the other two objectives, minimax and minsum, provide more restrictive and more relaxed efficiency solutions, respectively. this model shows that a wider solution space makes it possible to achieve more reasonable input and output weights (ghasemi et al., 2014).
3.2. a bi-objective multiple criteria data envelopment analysis (bio-mcdea)
the mcdea model consists of three independent objective functions: $\min d_0$, $\min M$ and $\min \sum_j d_j$, as defined in model 2. in a weighted model, these three independent objective functions can be combined as $w_1 d_0 + w_2 M + w_3 \sum_j d_j$ into a single-objective problem. different efficiency scores can be achieved by changing the weights $w_i$ ($i = 1, 2, 3$). however, since the first objective function ($\min d_0$) has the same meaning as the classical dea model, it can be removed from the mcdea model, as the discrimination power of the second ($\min M$) and third ($\min \sum_j d_j$) objective functions has been proved to be higher than that of the $\min d_0$ objective (li & reeves, 1999; san cristobal, 2011; hatami-marbini & toloo, 2017). therefore, only the $\min M$ and $\min \sum_j d_j$ objectives are weighted in the bio-mcdea model, which is shown below:
$\min h = w_2 M + w_3 \sum_{j} d_j$
$\text{s.t.} \;\; \sum_{i=1}^{m} v_i x_{ij_0} = 1$ (3)
$\sum_{r=1}^{s} u_r y_{rj} - \sum_{i=1}^{m} v_i x_{ij} + d_j = 0, \quad j = 1, \ldots, n$
$M - d_j \ge 0, \quad j = 1, \ldots, n$
$u_r \ge \varepsilon, \; r = 1, 2, \ldots, s; \quad v_i \ge \varepsilon, \; i = 1, 2, \ldots, m; \quad d_j \ge 0, \; j = 1, 2, \ldots, n$
the constraints of the bio-mcdea model are the same as those of the mcdea model of li & reeves (1999); only the $u_r$ and $v_i$ variables are additionally bounded below by the constant $\varepsilon$. although ghasemi et al. (2014) used $\varepsilon = 0.0001$ in the samples they solved, they did not suggest an approach for finding a suitable value for the constant $\varepsilon$. 
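as an illustration of how model (3) can be operationalized, the following is a minimal sketch, not the authors' implementation (they used lindo; see appendix 1): it solves the bio-mcdea linear programme for each club with scipy's linprog solver, using the 2019-2020 inputs and outputs from table 3 and the objective weights $w_2 = w_3 = 0.5$ that appear in the lindo script of appendix 1. the helper name bio_mcdea_efficiency and the use of scipy are assumptions of this sketch, and $\varepsilon$ is exposed as a parameter with the default 0 used in this study (see below).

# a minimal sketch (not the authors' lindo implementation): the bio-mcdea model (3)
# solved as a linear programme with scipy, using the 2019-2020 data from table 3 and
# the weights w2 = w3 = 0.5 taken from the lindo script in appendix 1.
import numpy as np
from scipy.optimize import linprog

clubs = ["manchester united", "real madrid", "fc barcelona", "bayern munich",
         "manchester city", "arsenal", "psg", "chelsea", "liverpool", "juventus"]
# inputs: v1 social media followers, v2 average number of viewers, v3 total market value
X = np.array([[127.2, 74698, 670.45], [226.7, 61040, 913.75], [216.5, 76104, 930.93],
              [74.4, 75865, 777.33], [62.9, 54130, 1048.6], [69.7, 59897, 607.65],
              [73.7, 46911, 874.15], [82.2, 40445, 705.85], [71.9, 53053, 1002.7],
              [83.4, 39101, 661.88]])
# outputs: u1 uefa club score, u2 total revenue
Y = np.array([[22000, 711.5], [17000, 757.3], [24000, 840.8], [36000, 660.1],
              [25000, 610.6], [10000, 445.6], [31000, 635.9], [17000, 513.1],
              [18000, 604.7], [22000, 459.7]])

def bio_mcdea_efficiency(X, Y, j0, w2=0.5, w3=0.5, eps=0.0):
    """efficiency h0 = 1 - d_j0 of dmu j0 under the bio-mcdea model (3)."""
    n, m = X.shape            # n dmus, m inputs
    s = Y.shape[1]            # s outputs
    nvar = s + m + n + 1      # decision vector z = [u_1..u_s, v_1..v_m, d_1..d_n, M]
    c = np.zeros(nvar)
    c[s + m:s + m + n] = w3   # w3 * sum of deviations d_j
    c[-1] = w2                # w2 * maximum deviation M
    # equalities: sum_i v_i x_{i,j0} = 1 and, for every j,
    #             sum_r u_r y_{r,j} - sum_i v_i x_{i,j} + d_j = 0
    A_eq = np.zeros((1 + n, nvar)); b_eq = np.zeros(1 + n)
    A_eq[0, s:s + m] = X[j0]; b_eq[0] = 1.0
    for j in range(n):
        A_eq[1 + j, :s] = Y[j]
        A_eq[1 + j, s:s + m] = -X[j]
        A_eq[1 + j, s + m + j] = 1.0
    # inequalities: d_j - M <= 0, i.e. M >= d_j
    A_ub = np.zeros((n, nvar)); b_ub = np.zeros(n)
    for j in range(n):
        A_ub[j, s + m + j] = 1.0
        A_ub[j, -1] = -1.0
    bounds = [(eps, None)] * (s + m) + [(0, None)] * (n + 1)   # u_r, v_i >= eps; d_j, M >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    if not res.success:
        raise RuntimeError(res.message)
    return 1.0 - res.x[s + m + j0]

for j0, name in enumerate(clubs):
    print(f"{name:<20s} h0 = {bio_mcdea_efficiency(X, Y, j0):.3f}")

scores broadly comparable to the bio-mcdea column of table 4 would be expected from such a formulation, although a different lp solver may resolve ties between alternative optimal weight sets differently.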
in addition, the bio-mcdea model is still robust if 𝜀 = 0 in the sample solved using a different data set. in this study, 𝜀 = 0 was used as in the original model. 3.3. data the data of this study was obtained from the 2020, 2019, 2018, 2017 and 2016 deloitte football money league reports and transfermrkt.com, which regularly collects data on the european football industry every year. in the study, the number of social media followers (v1), average number of viewers (v2) and total market value (v3) were used as input, while the uefa club score (u1) and total revenue (u2) variables were used as output. in their studies aichner (2018), alaminos et al. (2020) and weimar et al. (2021) used number of social media followers, haas (2003a), pestana barros et al. (2010), kern et al. (2012), alaminos et al. (2020) used average number of viewers, kulikova & goshunova (2014) and rubem & brandao (2015) used total market value, rubem & brandao (2015) used uefa club score, halkos & tzeremes (2013), kulikova & goshunova (2014), jardin (2009), guzman & morrow (2007), pestana barros et al. (2010), kern et al. (2012), chelmis et al. (2019) and miragaia et al. (2019) used the total revenue of the club as input or output variable. the definitions of the input and output variables are shown in table 1. arsu/decis. mak. appl. manag. eng. 4 (2) (2021) 106-125 112 table 1. bio-mcdea model input and output variable definitions variables definition the number of social media followers (𝑣1) the number of people following the clubs on facebook, instagram and twitter (*106) average number of viewers (𝑣2) the average number of people who came to the stadium as a spectator in matches hosted by clubs total market value(𝑣3) the sum of the market values of the club's footballers (*106 €) uefa club score (𝑢1) the total points the club has obtained from all matches during a season total revenue (𝑢2) the sum of club's matchday revenues, broadcasting revenues and commercial revenues (*106 €) pearson correlation coefficients are widely used when choosing input and output in dea (lewin et al., 1982; thanassoulis et al., 1987; golany & roll, 1989; friedman & sinuany-stern, 1998; dyson et al., 2001). lewin et al. (1982) argued that inputs should not be highly correlated with other inputs and outputs should not be highly correlated with other outputs. they also stated that if the inputs and outputs are negatively correlated with each other, these variables may be excluded from the model since the increase in inputs will affect the output negatively. the pearson correlation coefficients of the data used in the present study are shown in table 2. table 2. pearson correlation coefficients for bio-mcdea model inputs and outputs v1 v2 v3 u1 u2 v1 1 v2 0.578 1 v3 0.448 0.136 1 u1 0.179 0.209 0.219 1 u2 0.781 0.712 0.534 0.205 1 according to the results given in table 2, none of the pearson correlation coefficients had a very high, very low or negative value. therefore, no input or output variable was excluded from the model. in this study, an analysis of the efficiency of five seasons of 10 top european football clubs was performed. the values of input and output variables selected to determine the effectiveness of the 2015-2016, 2016-2017, 2017-2018, 2018-2019 and 20192020 seasons are shown in table 3. investigation into the efficiencies of european football clubs with bi-objective multi-criteria data envelopment analysis 113 table 3. 
input and output values of the bio-mcdea model 2019-2020 season football clubs v1 v2 v3 u1 u2 manchester united 127.2 74698 670.45 22000 711.5 real madrid 226.7 61040 913.75 17000 757.3 fc barcelona 216.5 76104 930.93 24000 840.8 bayern munich 74.4 75865 777.33 36000 660.1 manchester city 62.9 54130 1048.6 25000 610.6 arsenal 69.7 59897 607.65 10000 445.6 psg 73.7 46911 874.15 31000 635.9 chelsea 82.2 40445 705.85 17000 513.1 liverpool 71.9 53053 1002.7 18000 604.7 juventus 83.4 39101 661.88 22000 459.7 2018-2019 season manchester united 117.2 75102 797.6 19000 666 real madrid 207.8 66337 1033.1 19000 750.9 fc barcelona 195.5 70872 1201.4 30000 690.4 bayern munich 68.9 75354 784.88 20000 629.2 manchester city 53.1 54054 1203.4 25000 568.4 arsenal 64.7 59323 659.05 26000 439.2 psg 60.4 46929 1009.9 19000 541.7 chelsea 74.4 41281 1166.6 30000 505.7 liverpool 54.8 52958 1172.4 29000 513.7 juventus 63.1 36510 871.05 21000 394.9 2017-2018 season manchester united 110.2 75305 645.10 28985 676.3 real madrid 189.7 69426 716.2 37028 674.6 fc barcelona 184.3 78678 772.5 27028 648.3 bayern munich 59.5 75000 610.25 24914 587.8 manchester city 41 54019 616.35 20985 527.7 arsenal 61.2 59957 633.90 21985 487.6 psg 49.9 45160 581.10 22883 486.2 chelsea 69.9 41532 642.15 2985 428 liverpool 45.3 53094 495 2985 424.2 juventus 45.2 37195 540.53 35850 405.7 2016-2017 season manchester united 97.4 75327 533.25 15850 689 real madrid 158.4 71280 743.1 37785 620.1 fc barcelona 159.1 79724 787.2 30785 620.2 bayern munich 52.3 75017 595.4 32285 592 manchester city 30.7 54013 621.4 28850 524.9 arsenal 55 59980 522.75 17850 468.5 psg 37.5 46160 502.05 26216 520.9 chelsea 63.3 41500 603.3 20850 447.4 arsu/decis. mak. appl. manag. eng. 4 (2) (2021) 106-125 114 liverpool 39.8 44108 394.15 24850 403.8 juventus 34.6 39106 463.78 20300 341.1 2015-2016 season manchester united 83.1 75335 374.15 2714 519.5 real madrid 128.9 72969 700.75 33042 577 fc barcelona 132.8 77632 618.5 38042 560.8 bayern munich 41.5 72882 608.5 31171 474 manchester city 25.3 45345 452.75 17714 463.5 arsenal 46.4 59992 408.6 22714 435.5 psg 28.9 45789 433.3 23183 480.8 chelsea 56.1 41546 579.8 23714 420 liverpool 34.5 44675 325 12714 391.8 juventus 26.3 36292 394.33 32800 323.9 the reason why the 10 clubs were included in the efficiency evaluation is that these 10 clubs were ranked in the top 10 for five seasons in the deloitte football money league report, which was the main data source of this study. the deloitte football money league report publishes data for the 20 top financially successful clubs each season. however, 10 clubs other than the top 10 change at a certain rate each year. as data of some of the clubs other than the top 10 clubs from different sources could harm the homogeneity of the data, the clubs not included in the top 10 clubs were excluded from the scope of the study. 4. results the mcdea and bio-mcdea efficiency scores of the football clubs were calculated using lindo w32 software. the first three columns in table 4 are the efficiency results of the mcdea model solution. the fourth column consists of efficiency values obtained as a result of the bio-mcdea model solution. the last column refers to the ranking of the football clubs according to the results of the efficiency values obtained with the bio-mcdea model solution. the efficient football clubs (eff. 1) were ranked first, while the other clubs were ranked in order after that. table 4. bio-mcdea model efficiency scores. 
football clubs classical dea/min d0 min m min ∑d biomcdea rank 2 0 1 9 -2 0 2 0 s e a so n manchester united 1 1 1 1 1 real madrid 1 0.902 0.852 0.890 5 fc barcelona 1 0.911 0.886 0.917 2 bayern munich 0.988 0.890 0.834 0.834 7 manchester city 0.996 0.878 0.859 0.859 6 arsenal 0.826 0.843 0.800 0.800 8 psg 1 1 1 1 1 chelsea 1 1 1 1 1 liverpool 0.924 0.929 0.908 0.908 3 investigation into the efficiencies of european football clubs with bi-objective multi-criteria data envelopment analysis 115 juventus 0.928 0.852 0.891 0.891 4 2 0 1 8 -2 0 1 9 s e a so n manchester united 1 1 1 1 1 real madrid 1 1 1 1 1 fc barcelona 0.965 0.940 0.931 0.941 5 bayern munich 1 1 1 1 1 manchester city 1 0.943 0.922 0.922 7 arsenal 1 0.996 0.994 1 1 psg 1 0.977 0.995 0.981 3 chelsea 1 1 0.986 0.986 2 liverpool 0.996 0.931 0.934 0.934 6 juventus 0.953 0.943 0.953 0.946 4 2 0 1 7 -2 0 1 8 s e a so n manchester united 1 1 1 1 1 real madrid 1 0.965 0.972 0.965 4 fc barcelona 0.866 0.863 0.866 0.863 6 bayern munich 1 0.883 0.896 0.896 5 manchester city 1 0.979 0.990 0.990 2 arsenal 0.849 0.835 0.827 0.827 8 psg 1 1 1 1 1 chelsea 1 0.864 1 1 1 liverpool 1 0.979 0.969 0.969 3 juventus 1 0.946 0.828 0.828 7 2 0 1 6 -2 0 1 7 s e a so n manchester united 1 0.810 0.992 0.992 2 real madrid 0.944 0.844 0.844 0.844 4 fc barcelona 0.731 0.731 0.731 0.731 9 bayern munich 1 0.922 0.922 0.922 3 manchester city 1 0.837 0.837 0.837 5 arsenal 0.825 0.783 0.783 0.783 6 psg 1 1 1 1 1 chelsea 0.955 0.744 0.744 0.744 7 liverpool 1 1 1 1 1 juventus 0.913 0.738 0.738 0.738 8 2 0 1 5 -2 0 1 6 s e a so n manchester united 1 0.671 0.562 0.662 8 real madrid 0.798 0.791 0.798 0.798 5 fc barcelona 0.936 0.903 0.936 0.936 3 bayern munich 0.789 0.766 0.788 0.788 6 arsu/decis. mak. appl. manag. eng. 4 (2) (2021) 106-125 116 manchester city 1 0.871 0.852 0.852 4 arsenal 1 0.909 1 1 1 psg 1 1 1 1 1 chelsea 1 0.726 0.673 0.673 7 liverpool 1 0.934 0.983 0.983 2 juventus 1 1 1 1 1 it can be seen from table 4 that psg is the only club that was efficient for all three seasons. however, the ranking of other clubs according to the bio-mcdea model differed for each season. for instance, manchester united ranked first in the 20172018 season, second in the 2016-2017 season and eighth in the 2015-2016 season. this shows that the financial and sporting success of the clubs affects their rankings in different seasons. spearman rank correlation was commonly used in the literature to test the relationship between dmu rankings (haas et al., 2004; bal et al., 2010; örkcü & bal, 2011). in this study, the relationship between the ranks determined for three different seasons as a result of the bio-mcdea model was tested with spearman rank correlation. when the results of the spearman rank correlation were examined, no statistically significant relationship was found between the rankings for the three seasons. this result supports the idea that the financial and sporting achievements of the clubs affect their rankings in different seasons. in other words, the clubs achieved a ranking according to how successful they were in sports or financial terms. although the model was created for three consecutive seasons, the rankings differed greatly. the spearman correlation values are shown in table 5. table 5. 
bio-mcdea efficiency ranking spearman rank correlation values
seasons      2015-2016  2016-2017  2017-2018  2018-2019  2019-2020
2015-2016    1.000
2016-2017    0.529      1.000
2017-2018    -0.503     -0.080     1.000
2018-2019    -0.354     0.132      -0.076     1.000
2019-2020    -0.242     0.160      0.714      -0.146     1.000
as can be seen from the rankings in table 4, fc barcelona ranked third in 2015-2016, ninth in 2016-2017 and sixth in 2017-2018. fc barcelona was the club with the second highest total market value among the clubs included in the analysis of the 2015-2016 season. it was also the club with the second highest total revenue in the same season. this was reflected in their sporting success, as they reached the highest uefa score among the clubs involved in the analysis. total revenue, total market value and uefa club points placed fc barcelona in third place in the bio-mcdea model. however, although fc barcelona seemed to be the most valuable club in terms of total market value in the 2016-2017 and 2017-2018 seasons, it was observed that the quality of the footballers was not sufficient to increase their uefa club points and total revenue. due to this result, fc barcelona ranked lower in the 2016-2017 and 2017-2018 seasons according to the bio-mcdea model.
5. sensitivity analysis
these remarkable results raise the question of how much the input and output variables contribute to the model when determining the bio-mcdea model ranking. therefore, a sensitivity analysis was performed to determine which input or output contributed to the model. to determine the contribution of each input and output variable, the bio-mcdea efficiency values including all the variables and the bio-mcdea efficiency values calculated by excluding each input and output variable in turn were examined. in addition, pearson correlation coefficients were examined to determine the relationship between the efficiency values of the model including all input and output variables and the efficiency values when each variable was excluded from the model. the sensitivity analysis results and pearson correlation coefficients are shown in table 6. table 6. 
sensitivity analysis results for bio-mcdea model variables football clubs biomcdea without v1 (r1= 0.943*) without v2 (r2= 0.334) without v3 (r3= 0.302) without u1 (r4= 0.760*) without u2 (r5= 0.220) 2 0 1 9 -2 0 2 0 s e a so n manchester uni 1 1 1 0.795 1 0.660 real madrid 0.890 1 0.687 0.880 0.864 0.476 fc barcelona 0.917 1 0.757 0.821 0.996 0.600 bayern munich 0.834 0.893 1 0.621 0.877 1 manchester city 0.859 0.804 0.758 0.931 0.818 0.675 arsenal 0.800 0.725 0.822 0.702 0.750 0.350 psg 1 0.990 0.883 1 1 0.992 chelsea 1 0.954 0.811 1 0.965 0.663 liverpool 0.908 0.809 0.758 0.998 0.836 0.504 juventus 0.891 0.918 0.757 0.809 0.908 0.902 2 0 1 8 -2 0 1 9 s e a so n manchester uni 1 0.968 0.839 0.774 1 0.589 real madrid 1 1 0.454 0.839 1 0.361 fc barcelona 0.941 0.965 0.511 0.730 0.854 0.780 bayern munich 1 0.945 1 0.723 1 0.620 manchester city 0.922 0.888 0.654 0.887 0.913 0.750 arsenal 1 1 1 0.616 0.855 0.988 psg 0.981 0.925 0.684 1 1 0.665 chelsea 0.986 1 0.654 0.997 0.895 0.990 liverpool 0.934 0.893 0.653 0.794 0.842 0.870 juventus 0.946 0.937 0.663 0.887 0.867 0.882 2 0 1 7 -2 0 1 8 manchester uni 1 1 0.996 0.849 1 0.522 real madrid 0.965 0.972 0.781 0.861 0.998 0.259 fc barcelona 0.863 0.866 0.742 0.779 0.867 0.189 bayern munich 0.896 0.896 0.990 0.741 0.891 0.349 manchester city 0.990 0.990 0.925 0.913 0.961 0.504 arsenal 0.827 0.827 0.801 0.751 0.829 0.528 psg 1 1 0.866 1 1 0.595 chelsea 1 1 0.763 1 0.870 0.069 liverpool 0.969 0.969 1 0.775 0.860 0.090 juventus 0.828 0.828 0.687 0.959 0.952 1 2 0 1 6 -2 0 1 7 manchester uni 0.992 0.993 1 0.810 1 0.380 real madrid 0.844 0.844 0.719 0.771 0.789 0.680 fc barcelona 0.731 0.731 0.664 0.678 0.726 0.677 bayern munich 0.922 0.923 0.940 0.699 0.819 0.775 manchester city 0.837 0.838 0.782 0.861 0.835 0.750 arsenal 0.783 0.784 0.764 0.692 0.776 0.536 psg 1 1 0.940 1 1 0.996 arsu/decis. mak. appl. manag. eng. 4 (2) (2021) 106-125 118 chelsea 0.744 0.744 0.672 0.955 0.808 0.375 liverpool 1 1 0.992 0.811 0.898 0.994 juventus 0.738 0.739 0.704 0.773 0.737 0.711 2 0 1 5 -2 0 1 6 manchester uni 0.662 0.662 0.895 0.639 0.657 0.041 real madrid 0.798 0.798 0.803 0.763 0.745 0.628 fc barcelona 0.936 0.936 0.937 0.705 0.775 0.761 bayern munich 0.788 0.788 0.790 0.631 0.676 0.623 manchester city 0.852 0.852 0.859 0.961 0.937 0.478 arsenal 1 1 0.995 0.693 0.864 0.704 psg 1 1 1 0.992 1 0.649 chelsea 0.673 0.673 0.682 0.957 0.719 0.427 liverpool 0.983 0.983 0.969 0.822 1 0.470 juventus 1 1 0.996 0.889 0.769 0.984 when the results of the sensitivity analysis were examined, a significant correlation was observed between the bio-mcdea efficiency scores, which included all inputs and outputs, and the bio-mcdea efficiency scores, where two inputs and one output were excluded from the model. in particular, when the number of social media followers (v1) input variable was excluded from the model, an excellent correlation (r1=0.943) was observed between the obtained efficiency values and the activity values in which all variables were included in the model. that is to say, this variable did not contribute to the model. in the same way, a statistically significant and strong relationship (r4=0.760) was observed between the efficiency values obtained by excluding the uefa club score output (u1), and the bio-mcdea model in which all variables were included. it was found that these variables did not contribute to the model. the biggest contribution to the model was average number of viewers (v2) and the total market value (v3) inputs and the total revenue (u2) output. 
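for readers who wish to reproduce the leave-one-variable-out check described above, the following is a minimal sketch under the same assumptions as the earlier bio-mcdea sketch; it reuses the hypothetical bio_mcdea_efficiency helper and the X, Y arrays defined there. each input or output column is dropped in turn, the efficiencies are recomputed, and the pearson correlation with the full-model scores indicates how little the dropped variable contributes.

# a minimal sketch of the leave-one-variable-out sensitivity check described above,
# reusing the hypothetical bio_mcdea_efficiency helper and the X, Y arrays from the
# earlier sketch. a correlation close to 1 suggests the dropped variable adds little.
import numpy as np
from scipy.stats import pearsonr

def efficiencies(X, Y):
    return np.array([bio_mcdea_efficiency(X, Y, j0) for j0 in range(X.shape[0])])

full = efficiencies(X, Y)                                  # all inputs and outputs included
for k, label in enumerate(["v1", "v2", "v3", "u1", "u2"]):
    if k < X.shape[1]:                                     # drop one input column
        scores = efficiencies(np.delete(X, k, axis=1), Y)
    else:                                                  # drop one output column
        scores = efficiencies(X, np.delete(Y, k - X.shape[1], axis=1))
    r, _ = pearsonr(full, scores)
    print(f"without {label}: r = {r:.3f}")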
in other words, mainly the financial variables influenced the ranking of the bio-mcdea model of the clubs. 6. discussion in the analysis made using the data of the 2015-2016, 2016-2017, 2017-2018, 2018-2019 and 2019-2020 seasons, a comprehensive assessment was made by using the number of social media followers, average number of viewers, total market values, uefa club scores and total revenues. when the mcdea model min d0, which gives the same results with classical output-oriented dea efficiency values, and bio-mcdea model efficiency values were compared, it was concluded that the bio-mcdea model improved the discrimination power. this was because while seven clubs in the 20152016 season, five clubs in the 2016-2017 season, eight clubs in the 2017-2018 season, seven clubs in the 2018-2019 season and five clubs in the 2019-2020 season were efficient according to the classical dea model, only three clubs in the 2015-2016 season, two clubs in the 2016-2017 season, three clubs in the 2017-2018 season, four clubs in the 2018-2019 season and three clubs in the 2019-2020 season were efficient according to the bio-mcdea model. according to the results, arsenal, psg and juventus emerged as the efficient clubs in the 2015-2016 season, psg and liverpool emerged as the efficient clubs in the 2016-2017 season, manchester united, psg and chelsea emerged as the efficient clubs in the 2017-2018 season, manchester united, real madrid, bayern munich and arsenal emerged as the efficient clubs in the 20182019 season and manchester united, psg and chelsea emerged as the efficient clubs in the 2019-2020 season. psg was determined as an efficient club in four out of five seasons included in the analysis. in other words, psg was the most successful club among the analyzed clubs. investigation into the efficiencies of european football clubs with bi-objective multi-criteria data envelopment analysis 119 this may be attributed to the sale of the club to a qatari fund group in 2011. correspondingly, the market value of the club increased with the large expenditures made for transfers immediately after the sale. with this acceleration, the club which had only won two championships in the france ligue 1 since 1970, became the champion seven times in eight seasons after the 2012 season. the club, which became the champion almost every season after the 2012 season, increased its uefa club points by joining the champions league every year and achieved a financially stable structure. in other words, the club, which was financially supported after the 2011 season, increased its sporting achievements, which in turn stabilized its financial support. among the 10 clubs included in the analysis, manchester city, which was purchased by a funding organization like in the case of psg, was not as efficient as psg according to the bio-mcdea model. in the 2008 season when manchester city was acquired, it was financially supported, similar to psg. however, manchester city has not been as successful as psg. the reason for this is that manchester city cannot dominate the premier league as psg dominates ligue 1, as 5 of the top 10 clubs at the top of the big-five league compete in the premier league. this suggests that financial support alone does not have an impact on success. in this study, a sensitivity analysis was performed to measure the sensitivity of efficiency measurement results according to different input/output combinations. 
each input and output was removed from the model, which was then resolved and the behavior of the model against the extracted variable was monitored. while a perfect correlation was found between the model created by subtracting the “the number of social media followers” input and the original bio-mcdea model, statistically significant correlations were observed between the model created by subtracting the “uefa club scores” output from the model and the bio-mcdea model. this means that while the “the number of social media followers” input makes almost no contribution to the model, the “uefa club scores” output provide relatively less contribution to the model than other inputs and outputs. no relationship was found between the original model and the bio-mcdea model created by excluding the “average number of viewers” and the “total market values” inputs and the “total revenues” output from the model. these variables were determined as the main determinants of the model. this suggests that the variables that contribute to the model are mainly financial variables. however, especially considering the use of social media in the 21st century, it is noteworthy that the “the number of social media followers” variable is not determinative in terms of the model. 7. conclusion the purpose of this study was to determine the efficiencies of the top 10 clubs in the big-five league, which make up the largest share of the world football industry. the analysis of efficiency for only 10 clubs can be counted among the limitations of this study. the reason for the inclusion of these 10 clubs in the efficiency review was that although the rankings of the clubs have changed, they are still in the top 10. the deloitte football money league report, from which most of the data in this study was obtained, publishes data on the top 20 clubs in terms of finance every year. while the clubs in the top 10 almost never change, the clubs in the last 10 can enter and exit the list. only 10 clubs were included in the analysis to obtain consistent data over the entire five years of the analysis. another limitation of this study was that the analysis was carried out with only quantitative data. however, this analysis could be supported arsu/decis. mak. appl. manag. eng. 4 (2) (2021) 106-125 120 by qualitative data obtained from football professionals including club managers, sponsors, etc. in future studies, the number of football clubs included in the analysis can be increased by using more resources and time, and the obtained quantitative findings can be supported by qualitative findings. the bio-mcdea model, which is an efficiency determination method based on linear programming, was used in the efficiency analysis. it can be said that using this model was the most obvious advantage of the study. the reason for selecting the biomcdea model was to prevent the low discrimination problem of classical dea. the findings of the study also included the results obtained with classical dea. when the classical dea findings were examined, it was concluded that a very high number of clubs were efficient. in this case, it will be difficult to distinguish between clubs. moreover, useful information for decision-makers cannot be obtained. "super efficiency" models can be used to determine which of the efficient clubs are more efficient. in this case, it will produce more complex results for both decision makers and analysts. 
in addition, the bio-mcdea model has easier solution steps compared to other methods such as mcdea and gpdea that aim to eliminate the low discrimination power problem of classical dea. another advantage of this study is that sports and financial data were used together. this is because financial success is to be used as leverage for sportive success. in this respect, instead of evaluating and associating financial and sports data separately, this study included both in the same model. as financial and sportive success can only be achieved through successful management practices, some managerial implications were made in line with the findings of the study. in order to examine the contribution of the criteria to the model, a sensitivity analysis was conducted in which each criterion was removed from the model, which was then solved again. according to the results of this analysis, the criteria of average audience number, total market value and total income were determined as the criteria that made the greatest contribution to the model. although the criteria for total market value and total income are direct financial criteria, the average number of viewers seems to be a non-financial criterion. this is because the matchday revenues are at the lower ranks among the revenue items of football clubs. however, bringing fans to the stadium does not only contribute to the clubs as ticket revenue but also to the sales in commercial products and to sponsors spending more on stadium advertisements. in addition, the fact that football clubs achieve more sportive success in the home field can be explained with the support of the fans. from this point of view, club management can implement various practices to make stadiums more attractive to the fans. among the practices that increase the attractiveness of stadiums are the club management selling tickets at lower prices, facilitating access to the stadium, and creating areas where families can spend time in the stadium. although the 10 clubs analyzed are not in the same league, they are constantly in competition as they participate in international tournaments every year. in order to keep competition alive, the financial resource must be sustainable. in order for the financial resource to be sustainable, clubs want to continuously participate in international tournaments, which are one of the most revenue generating elements of the industry. the financial benefits of a successful season will only benefit the club in the next season. although real madrid was champion in the champions league in the 2017-2018 season, according to the analysis conducted in this study, it was found to be an efficient club in the 2018-2019 season, not the 2017-2018 season. similarly, chelsea, which was champion in the european league in 2018-2019, was only found to be an efficient club in the 2019-2020 season. as these examples show, clubs can investigation into the efficiencies of european football clubs with bi-objective multi-criteria data envelopment analysis 121 only provide sustainable financial resources with sustainable sportive success. moreover, they can transfer talented players who can participate in international tournaments every year to make financial resources sustainable, or they can invest in their academies to produce their own qualified football players. clubs that make their financial resources sustainable are referred to as "big clubs". 
it can be said that achieving sportive success is easier for these clubs compared to other clubs, as big clubs are more advantageous in terms of attracting talented and qualified players. however, financial sustainability depends on a number of factors that are not constantly under control. for example, some penalties imposed by uefa in accordance with ffp policies harm the financial sustainability of clubs. in addition to the clubs' efforts to cope with the ffp limitations, the covid-19 pandemic, which emerged in the province of wuhan in china in december 2019 and spread all over the world in a short time, led to huge decreases in the revenues of the clubs. due to covid19, some countries have suspended their leagues for a long period of time, broadcasting agreements were interrupted and stadium revenues were not obtained. to avoid the effects of factors such as these that could harm financial sustainability, clubs sometimes turn to finding new sources of funding. for example, clubs may try to provide additional financing with initiatives such as the "european super league", which was established on april 19, 2021 and was dissolved after only 48 hours. however, for a sport whose rules and organizations are deeply rooted, such initiatives may cause clubs to disconnected from other clubs. therefore, in order to make the financial resource sustainable, clubs should make their sportive success sustainable in every platform. funding: this research received no external funding. conflicts of interest: the author declare no conflicts of interest. references andrade, r. m. d., lee, s., lee, p. t. w., kwon, o. k., & chung, h. m. (2019). port efficiency incorporating service measurement variables by the bio-mcdea: brazilian case. sustainability, 11(16), 4340. https://doi.org/10.3390/su11164340 angulo-meza, l., gonzález-araya, m., iriarte, a., rebolledo-leiva, r., & de mello, j. c. s. (2019). a multiobjective dea model to assess the eco-efficiency of agricultural practices within the cf+ dea method. computers and electronics in agriculture, 161, 151-161. anthony, p., behnoee, b., hassanpour, m., & pamucar, d. (2019). financial performance evaluation of seven indian chemical companies. decision making: applications in management and engineering, 2(2), 81-99. https://doi.org/10.31181/dmame1902021a bal, h., örkcü, h. h., & çelebioğlu, s. (2010). improving the discrimination power and weights dispersion in the data envelopment analysis. computers & operations research, 37(1), 99-107. https://doi.org/10.1016/j.cor.2009.03.028 blagojević, a., vesković, s., kasalica, s., gojić, a., & allamani, a. (2020). the application of the fuzzy ahp and dea for measuring the efficiency of freight transport railway arsu/decis. mak. appl. manag. eng. 4 (2) (2021) 106-125 122 undertakings. operational research in engineering sciences: theory and applications, 3(2), 1-23. https://doi.org/10.31181/oresta2003001b charnes, a., cooper, w. w., & rhodes, e. (1978). measuring the efficiency of decision making units. european journal of operational research, 2(6), 429-444. https://doi.org/10.1016/0377-2217(78)90138-8 chelmis, e., niklis, d., baourakis, g., & zopounidis, c. (2019). multiciteria evaluation of football clubs: the greek superleague. operational research, 19(2), 585-614. https://doi.org/10.1007/s12351-017-0300-2 da silva, a. f., marins, f. a. s., tamura, p. m., & dias, e. x. (2017). 
bi-objective multiple criteria data envelopment analysis combined with the overall equipment effectiveness: an application in an automotive company. journal of cleaner production, 157, 278-288. https://doi.org/10.1016/j.jclepro.2017.04.147 deloitte. football money league report (2016). https://www2.deloitte.com/me/en/pages/consumer-business/articles/deloittefootball-money-league1.html accessed 13 march 2020 deloitte. football money league report (2017). https://www2.deloitte.com/tr/en/pages/consumer-industrialproducts/articles/deloitte-football-money-league.html accessed 13 march 2020 deloitte. football money league report (2018). https://www2.deloitte.com/mk/en/pages/consumer-business/articles/deloittefootball-money-league2.html accessed 13 march 2020 deloitte. football money league report (2019). https://www2.deloitte.com/global/en/pages/consumer-business/articles/deloittefootball-money-league.html 1 may 2021 deloitte. football money league report (2020). https://www2.deloitte.com/bg/en/pages/finance/articles/football-money-league2020.html accessed 1 may 2021 despic, d., bojovic, n., kilibarda, m. & kapetanovic, m. (2019). assessment of efficiency of military transport units using the dea and sfa methods. military technical courier, 67(1), 68–92. https://doi.org/10.5937/vojtehg67-18508. dobson, s. & goddard, j. (2011). the economics of football (second edition). new york: cambridge university press. dyson, r. g., allen, r., camanho, a. s., podinovski, v. v., sarrico, c. s., & shale, e. a. (2001). pitfalls and protocols in dea. european journal of operational research, 132(2), 245-259. https://doi.org/10.1016/s0377-2217(00)00149-1 friedman, l., & sinuany-stern, z. (1998). combining ranking scales and selecting variables in the dea context: the case of industrial branches. computers & operations research, 25(9), 781-791. https://doi.org/10.1016/s0305-0548(97)00102-0 galariotis, e., germain, c., & zopounidis, c. (2018). a combined methodology for the concurrent evaluation of the business, financial and sports performance of football clubs: the case of france. annals of operations research, 266(1), 589-612. https://doi.org/10.1007/s10479-017-2631-z investigation into the efficiencies of european football clubs with bi-objective multi-criteria data envelopment analysis 123 ghasemi, m. r., ignatius, j., & emrouznejad, a. (2014). a bi-objective weighted model for improving the discrimination power in mcdea. european journal of operational research, 233(3), 640-650. https://doi.org/10.1016/j.ejor.2013.08.041 ghofran, a., sanei, m., tohidi, g., & bevrani, h. (2021). applying mcdea models to rank decision making units with stochastic data. international journal of industrial mathematics, 13(2), 101-111. golany, b., & roll, y. (1989). an application procedure for dea. omega, 17(3), 237-250. https://doi.org/10.1016/0305-0483(89)90029-7 guzmán, i., & morrow, s. (2007). measuring efficiency and productivity in professional football teams: evidence from the english premier league. central european journal of operations research, 15(4), 309-328. https://doi.org/10.1007/s10100-007-0034y haas, d. j. (2003a). productive efficiency of english football teams—a data envelopment analysis approach. managerial and decision economics, 24(5), 403-410. https://doi.org/10.1002/mde.1105 haas, d. j. (2003b). technical efficiency in the major league soccer. journal of sports economics, 4(3), 203-215. https://doi.org/10.1177/1527002503252144 haas, d., kocher, m. g., & sutter, m. (2004). 
measuring efficiency of german football teams by data envelopment analysis. central european journal of operations research, 12(3), 251-268. halkos, g. e., & tzeremes, n. g. (2013). a two‐stage double bootstrap dea: the case of the top 25 european football clubs' efficiency levels. managerial and decision economics, 34(2), 108-115. https://doi.org/10.1002/mde.2597 hassanpour, m. (2020). evaluation of iranian small and medium-sized industries using the dea based on additive ratio model–a review. facta universitatis, series: mechanical engineering, 18(3), 491-511. https://doi.org/10.22190/fume200426030h hatami-marbini, a., & toloo, m. (2017). an extended multiple criteria data envelopment analysis model. expert systems with applications, 73, 201-219. https://doi.org/10.1016/j.eswa.2016.12.030 jardin, m. (2009). efficiency of french football clubs and its dynamics. munich personal repec archive. https://mpra.ub.uni-muenchen.de/19828/ accessed 18 june 2020. https://mpra.ub.uni-muenchen.de/19828/1/mpra_paper_19828.pdf kamarudin, f., sufian, f., nassir, a. m., anwar, n. a. m., & hussain, h. i. (2019). bank efficiency in malaysia a dea approach. journal of central banking theory and practice, 8(1), 133-162. https://doi.org/10.2478/jcbtp-2019-0007 kern, a., schwarzmann, m., & wiedenegger, a. (2012). measuring the efficiency of english premier league football. sport, business and management: an international journal, 2(3), 177-195. https://doi.org/10.1108/20426781211261502 kohl, s., schoenfelder, j., fügener, a., & brunner, j. o. (2019). the use of data envelopment analysis (dea) in healthcare with a focus on hospitals. health care management science, 22(2), 245-286. https://doi.org/10.1007/s10729-018-9436-8 https://mpra.ub.uni-muenchen.de/19828/ arsu/decis. mak. appl. manag. eng. 4 (2) (2021) 106-125 124 kulikova, l. i., & goshunova, a. v. (2014). efficiency measurement of professional football clubs: a non-parametric approach. life science journal, 11(11), 117-122. lewin, a. y., morey, r. c., & cook, t. j. (1982). evaluating the administrative efficiency of courts. omega, 10(4), 401-411. https://doi.org/10.1016/0305-0483(82)90019-6 li, x. b., & reeves, g. r. (1999). a multiple criteria approach to data envelopment analysis. european journal of operational research, 115(3), 507-517. https://doi.org/10.1016/s0377-2217(98)00130-1 lombardi, g. v., stefani, g., paci, a., becagli, c., miliacca, m., gastaldi, m., giannetti, b. f., & almeida, c. m. v. b. (2019). the sustainability of the italian water sector: an empirical analysis by dea. journal of cleaner production, 227, 1035-1043. https://doi.org/10.1016/j.jclepro.2019.04.283 marcén m. (2019) bosman ruling. in: marciano a., ramello g.b. (eds) encyclopedia of law and economics. new york: springer. miragaia, d., ferreira, j., carvalho, a., & ratten, v. (2019). interactions between financial efficiency and sports performance. journal of entrepreneurship and public policy, 8(1), 84-102. https://doi.org/10.1108/jepp-d-18-00060 örkcü, h. h., & bal, h. (2011). goal programming approaches for data envelopment analysis cross efficiency evaluation. applied mathematics and computation, 218(2), 346-356. https://doi.org/10.1016/j.amc.2011.05.070 pestana barros, c. p., & leach, s. (2006). performance evaluation of the english premier football league with data envelopment analysis. applied economics, 38(12), 1449-1458. https://doi.org/10.1080/00036840500396574 pestana barros, c., assaf, a., & sá-earp, f. (2010). 
brazilian football league technical efficiency: a simar and wilson approach. journal of sports economics, 11(6), 641-651. https://doi.org/10.1177/1527002509357530 pradhan, s., boyukaslan, a., & ecer, f. (2017). applying grey relational analysis to italian football clubs: a measurement of the financial performance of serie a teams. international review of economics and management, 4(4), 1-19. https://doi.org/10.18825/iremjournal.290668 rashidi, k., & cullinane, k. (2019). a comparison of fuzzy dea and fuzzy topsis in sustainable supplier selection: implications for sourcing strategy. expert systems with applications, 121, 266-281. https://doi.org/10.1016/j.eswa.2018.12.025 rossi, g., goossens, d., di tanna, g. l., & addesa, f. (2019). football team performance efficiency and effectiveness in a corruptive context: the calciopoli case. european sport management quarterly, 19(5), 583-604. https://doi.org/10.1080/16184742.2018.1553056 rubem, a. p. s., & brandão, l. c. (2015). multiple criteria data envelopment analysis– an application to uefa euro 2012. procedia computer science, 55, 186-195. https://doi.org/10.1016/j.procs.2015.07.031 sakınç, i̇., açıkalın, s., & soygüden, a. (2017). evaluation of the relationship between financial performance and sport success in european football. journal of physical education and sport, 17(1), 16-22. https://doi.org/10.7752/jpes.2017.s1003 investigation into the efficiencies of european football clubs with bi-objective multi-criteria data envelopment analysis 125 sałabun, w., shekhovtsov, a., pamučar, d., wątróbski, j., kizielewicz, b., więckowski, j., bozanic d., urbaniak, k., & nyczaj, b. (2020). a fuzzy inference system for players evaluation in multi-player sports: the football study case. symmetry, 12(12), 2029. https://doi.org/10.3390/sym12122029 san cristóbal, j. r. (2011). a multi criteria data envelopment analysis model to evaluate the efficiency of the renewable energy technologies. renewable energy, 36(10), 2742-2746. https://doi.org/10.1016/j.renene.2011.03.008 thanassoulis, e., dyson, r. g., & foster, m. j. (1987). relative efficiency assessments using data envelopment analysis: an application to data on rates departments. journal of the operational research society, 38(5), 397-411. https://doi.org/10.1057/jors.1987.68 transfermrkt.com. (2019). https://www.transfermarkt.com/spielerstatistik/wertvollstemannschaften/marktwertetop accessed 26 december 2019 uefa (2021). financial fair play. https://www.uefa.com/insideuefa/protecting-thegame/financial-fair-play/ accessed 2 may 2021. appendix 1. example of lindo codes for bio-mcdea model (manchester united 2019-2020 season) min 0.5m+0.5d1+0.5d2+0.5d3+0.5d4+0.5d5+0.5d6+0.5d7+0.5d8+0.5d9+0.5d10 subject to 127.2x1+74698x2+670.45x3=1 22000y1+711.5y2-127.2x1-74698x2-670.45x3+d1=0 17000y1+757.3y2-226.7x1-61040x2-913.75x3+d2=0 24000y1+840.8y2-216.5x1-76104x2-930.93x3+d3=0 36000y1+660.1y2-74.4x1-75865x2-777.33x3+d4=0 25000y1+610.6y2-62.9x1-54130x2-1048.6x3+d5=0 10000y1+445.6y2-69.7x1-59897x2-607.65x3+d6=0 31000y1+635.9y2-73.7x1-46911x2-874.15x3+d7=0 17000y1+513.1y2-82.2x1-40445x2-705.85x3+d8=0 18000y1+604.7y2-71.9x1-53053x2-1002.7x3+d9=0 22000y1+459.7y2-83.4x1-39101x2-661.88x3+d10=0 m-d1>=0 m-d2>=0 m-d3>=0 m-d4>=0 m-d5>=0 m-d6>=0 m-d7>=0 m-d8>=0 m-d9>=0 m-d10>=0 end © 2018 by the authors. submitted for possible open access publication under the terms and conditions of the creative commons attribution (cc by) license (http://creativecommons.org/licenses/by/4.0/). 
decision making: applications in management and engineering vol. 1, number 1, 2018, pp. 97-120 issn: 2560-6018 doi: https://doi.org/10.31181/dmame180197k * corresponding author. e-mail addresses: zkaravidic@gmail.com (z. karavidić), damirpro@yahoo.com (d. projović) a multi-criteria decision-making (mcdm) model in the security forces operations based on rough sets zoran karavidić1*, damir projović1 1 university of defence in belgrade, military academy, department of management, belgrade, serbia received: 3 january 2018; accepted: 18 february 2018; published: 15 march 2018. original scientific paper
abstract: the paper presents a multi-criteria decision-making model based on the application of the rough set theory. the model demonstrates the exceptional importance of the software application of rough sets to decision-making in security forces operations. applying rough sets is a useful tool when the data needed for the decision-making process include vagueness and uncertainty. by applying the model based on the applicative use of rough sets, specific decision-making rules are formulated. these rules guide the decision-makers through the complete process of planning security operations.
key words: multi-criteria decision-making, rough sets, course of action, rosetta, rose2.
1. introduction
modern international relations are very unpredictable in political, economic and social life. in such an environment, there is a frequent need for engaging security forces due to the demand for the protection of national interests or the democratic order. the security forces are engaged in various operations. in recent years, the security forces have often been involved in counterterrorism and counter-insurgency assignments around the world. however, the objective of these operations could also be to support civilians in the case of natural disasters, to fight crime, or to carry out various other combat and non-combat engagements involving military, police and other security forces (slavkovic et al., 2012, 2013). the complexity of managing security forces operations, especially of deciding how to use the security forces, represents a major challenge. choosing one from a set of available courses of action (coa) is a part of the multi-criteria decision-making (mcdm) process which cannot be avoided. in this respect, the problem is how to choose a coa based on incomplete, inaccurate and inseparable data in the security forces operations with the help of various decision-making support models. the significance of this problem is reflected in possible major losses of resources, both human and material. in that sense, every model contributing to a well-timed and better decision made by the managing authorities will contribute to a more efficient implementation of the security forces operation. so far, the following have been considered for the needs of the security forces: a fuzzy logical system in support of the decision-making process in a military organization (pamučar et al., 2011), a hybrid fahp-mabac model for selecting locations for the preparation of laying-up positions (božanić et al., 2016), as well as combined gis and multi-criteria techniques in the selection of sites suitable for ammunition depots (gigović et al., 2016). due to secrecy and licensing, various world experiences are rather difficult to access. 
they are also limited to learning about general settings of functioning. in other areas, applying the rough sets theory (rst) in decision-making focuses its use on modern business environment (shen & chen, 2013, shen, et al., 2017), estimation of bridges construction (kuburić et al., 2012), performance improvement of transportation systems (deshpande & bajaj, 2017) and mining for underground deep-hole mining (jiang et al., 2009). applying the rst is significant in the medical field in preventing diseases (chowdhary & acharjya, 2016) diagnostics (stokić et al., 2010; ji et al., 2012; burney & abbas, 2015), and processing medical data (durairaj & sathyavathi, 2013). the rst has also been used in data mining (greco et al., 2002; jia et al., 2007; chen et al., 2015), with different computer models (dobrilovic et al., 2012). the methods dealing with support in the decision-making operations of security forces choose a coa based on different methodologies of attribute comparison and suggest a given solution to the decision-makers. the application of the model based on the rough sets uses the previously performed security forces operations. by using the software systems with reduction principle, the most important attributes for decision-making are discovered. through decision algorithms, guidelines are given to decision-makers in the decision-making process. the advantage of this model is not only in providing support to the decision-making process in choosing a coa but also in guiding the whole decision-making process. at the same time, a great amount of time is saved. the paper is divided into several sections, namely: section 2 explains the problems of decision-making in a modern security environment, while section 3 presents the basics of the rst. section 4 refers to the existing software systems based on the rst, while section 5 shows the use of the proposed model based on the rst. section 6 gives a discussion of the model results. finally, section 7 presents the conclusions highlighting directions for further research. 2. problems of decision-making in security operations in a modern environment a modern security environment does not represent a precisely defined set of variables. it is an extremely complex part of the society that expresses all its interactions. the use of the security forces in operations is certainly susceptible to the impact of such an environment. each of the possible impacts consists of a subsystem spectrum and contains different interconnections. a great number of factors, which could more or less affect the operation results, emerge from a complex a multi-criteria decision-making (mcdm) model in the security forces operations ... 99 and unpredictable environment. those factors can be observed as criteria or attributes in the decision-making process. persons who decide on the use of force are trying in various ways to make the most appropriate choice among the coas offered. the appropriate decision is often reflected in human lives, and the proper approach is extremely important. such problems represent a major challenge for decision-makers. they are semistructured and unstructured which makes it difficult to solve them. therefore, there is space for implementing different decision-making support models that need to improve the decision-making process. they represent symbiosis of information systems, the application of a set of functional knowledge and the ongoing decisionmaking process (suknović & delibašić, 2010). 
for their work, they search for a database that forms the source of information, certain model solutions, and the corresponding user interface. the models should improve the knowledge of the decision-maker in order to help him make the right decision. supporting the choice of the coa in security forces operations is a very complex process. in addition to a large number of inseparable factors, there is a constant time constraint as well as the need for a quick response of the entire system. time constraint is one of the biggest problems since it affects, directly or indirectly, different parts of the planning process and the organization of operations. in the process of preparing such operations, time limits the implementation of various expert methods and disables the complete analysis of the environment. the time for decision-making, usually measured in hours, is very brief and it can be even shorter. the short time can make the entire decision-making process even harder. these problems are often expanded by a large number of contradictory, unclear and inseparable pieces of information, which arise in the later stages of the decisionmaking process. the time frame in those situations does not allow a detailed analysis and precise classification. various software systems have been developed for the needs of the entire decision-making process in security forces operations. these systems provide different types of support to the decision-making process. one of such systems is topfas (tamai, 2009) developed especially for the needs of the comprehensive approach to planning the use of nato forces. it contains support for all levels of planning. it enables a detailed and rapid system analysis, support to decision-making and assistance in monitoring the implementation of the decision. 3. the basis of the rough sets imperfect knowledge has always been the subject of study in various fields of science. many approaches to the problem, such as how to understand imperfect knowledge and how to handle it, have been developed. one of the approaches to the problem is the rst. the rough set theory is a mathematical theory presented by the polish scientist zdzisław pawlak at the beginning of the 80’s in the 20th century (pawlak, 1982). this theory has found a number of interesting applications and it is essential for artificial intelligence and cognitive sciences, especially in the areas of machine learning, knowledge acquisition, decision analysis, knowledge discovery from databases, expert systems, inductive reasoning, and pattern recognition. the rough set theory starts from the assumption that each object in the universe (u) is described by some characteristic information. different objects that are described by the same piece of information are considered to be inseparable, i.e. karavidić & projović/decis. mak. appl. manag. eng. 1 (1) (2018) 97-120 100 similar to each other. the indiscernibility relation (i) created in this way represents the mathematical foundation of the rst and in certain sense describes our lack of knowledge about the universe. every rough set contains an appropriate boundary area with objects. these objects cannot be regarded, with any certainty, as belonging to any observed set or its complement. accordingly, it is assumed that a rough set can be represented by a pair of classical sets, which we call its upper and lower approximation. 
the lower approximation contains the objects which certainly belong to the set, while the upper approximation contains the objects which possibly belong to the observed set. these two basic operations can be displayed in the following way:

upper approximation I^*(X) = {x ∈ U : I(x) ∩ X ≠ ∅}, and (1)
lower approximation I_*(X) = {x ∈ U : I(x) ⊆ X}, (2)

where X is a subset of U. the difference between the upper and the lower approximation is the boundary region of the rough set (figure 1). the specified operation can be displayed as follows:

boundary region BR_I(X) = I^*(X) − I_*(X). (3)

figure 1. graph view of the rough set with upper and lower approximation

rough sets are defined by approximations. approximations have the following properties:

I_*(X) ⊆ X ⊆ I^*(X) (4)
I_*(∅) = I^*(∅) = ∅, I_*(U) = I^*(U) = U (5)
I_*(X ∩ Y) = I_*(X) ∩ I_*(Y) (6)
I_*(X ∪ Y) ⊇ I_*(X) ∪ I_*(Y) (7)
I^*(X ∩ Y) ⊆ I^*(X) ∩ I^*(Y) (8)
I^*(X ∪ Y) = I^*(X) ∪ I^*(Y) (9)
if X ⊆ Y, then I_*(X) ⊆ I_*(Y) and I^*(X) ⊆ I^*(Y) (10)
I_*(−X) = −I^*(X) (11)
I^*(−X) = −I_*(X) (12)
I_*(I_*(X)) = I^*(I_*(X)) = I_*(X) (13)
I^*(I^*(X)) = I_*(I^*(X)) = I^*(X) (14)

it is concluded that the upper and the lower approximations are, in a sense, created under the influence of the indiscernibility relation. the pieces of information we have about the objects in the boundary region are often inconsistent or even unclear. when the boundary region is empty (BR_I(X) = ∅), i.e. when the lower and upper approximations match, the set is crisp (precise). the larger the boundary region, the rougher the set becomes. this can be shown by using the accuracy of approximation coefficient:

α_I(X) = |I_*(X)| / |I^*(X)|, (15)

where |X| denotes the cardinality of X. for α_I(X) = 1 the set is precise; for all the values 0 ≤ α_I(X) < 1 the set is rough. therefore, the cardinality of the boundary region can be used as a measure of vagueness, that is, of the uncertainty in relation to the observed set (čupić & suknović, 2010). the uncertainty is connected to the elements that belong to the set. because of the above, rough sets can also be defined by the rough membership function, which expresses the uncertainty through the indiscernibility relation I:

μ_X^I(x) = |X ∩ I(x)| / |I(x)|, (16)

where 0 ≤ μ_X^I(x) ≤ 1. if μ_X^I(x) < 1 for some x ∈ X, the set X is rough with respect to I; if μ_X^I(x) = 1 for every x ∈ X, the set is precise. the rough membership function has the following properties:

μ_X^I(x) = 1 iff x ∈ I_*(X) (17)
μ_X^I(x) = 0 iff x ∈ U − I^*(X) (18)
0 < μ_X^I(x) < 1 iff x ∈ BR_I(X) (19)
μ_{U−X}^I(x) = 1 − μ_X^I(x), for any x ∈ U (20)
μ_{X∩Y}^I(x) ≤ min(μ_X^I(x), μ_Y^I(x)), for any x ∈ U (21)
μ_{X∪Y}^I(x) ≥ max(μ_X^I(x), μ_Y^I(x)), for any x ∈ U (22)

generally, the rough membership function is a coefficient which expresses the uncertainty of the element x, where x ∈ X. the rough membership function can be used to define the approximations and the boundary region of a set, as follows:

I^*(X) = {x ∈ U : μ_X^I(x) > 0} (23)
I_*(X) = {x ∈ U : μ_X^I(x) = 1} (24)
BR_I(X) = {x ∈ U : 0 < μ_X^I(x) < 1} (25)
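the definitions above can be made concrete with a small, self-contained sketch. the following python fragment is ours and is not part of the original model: it builds the indiscernibility classes of a toy information table with invented objects and attribute values, and then evaluates equations (1)–(3), (15) and (16) on them.

```python
# minimal sketch (not from the paper): indiscernibility classes, lower/upper
# approximation, boundary region, accuracy coefficient and rough membership
# computed on a toy information table with invented objects and values.

# each object is described by two condition attributes
table = {
    "x1": ("a", "p"), "x2": ("a", "p"), "x3": ("a", "q"),
    "x4": ("b", "q"), "x5": ("b", "q"), "x6": ("b", "p"),
}
X = {"x1", "x3", "x4"}            # the set of objects we want to approximate

def ind_classes(desc):
    """partition the universe into classes of objects with identical descriptions."""
    classes = {}
    for obj, d in desc.items():
        classes.setdefault(d, set()).add(obj)
    return list(classes.values())

def approximations(classes, X):
    lower = {x for c in classes if c <= X for x in c}   # eq. (2): certainly in X
    upper = {x for c in classes if c & X for x in c}    # eq. (1): possibly in X
    return lower, upper

def membership(x, X, classes):
    """rough membership function, eq. (16): |X ∩ I(x)| / |I(x)|."""
    I_x = next(c for c in classes if x in c)
    return len(X & I_x) / len(I_x)

classes = ind_classes(table)
lower, upper = approximations(classes, X)
boundary = upper - lower                                # eq. (3)
accuracy = len(lower) / len(upper)                      # eq. (15)

print("lower approximation :", sorted(lower))
print("upper approximation :", sorted(upper))
print("boundary region     :", sorted(boundary))
print("accuracy coefficient:", accuracy)
print("membership of x1    :", membership("x1", X, classes))
```

on this toy table the set X is rough: its boundary region is non-empty and the accuracy coefficient is below one, so some objects can be classified only with a degree of membership between 0 and 1.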
when solving a problem by using the rst, rules assigning different decisions to elements of the same indiscernibility class can appear. such rules are called inconsistent and, when used, they lead to an inability to make the right decision. the problem of inconsistent rules is solved by using the consistency factor c which, based on the decision rule δ(x), is defined as follows:

c(δ(x)) = { 1, for μ_X^I(x) = 0 or μ_X^I(x) = 1; μ_X^I(x), for 0 < μ_X^I(x) < 1 } (26)

the closer the value of the consistency factor gets to one, the more authentic the rule becomes; if the factor is equal to one, the rule is consistent. in the rough set theory, there is a strict link between vagueness and uncertainty (boričić, 2004). vagueness relates to sets, while uncertainty relates to objects. because of that, approximations are necessary when speaking about the vagueness of a set, while the rough membership function is necessary when speaking about the uncertainty of a given object's belonging to the observed set. input data can be quantitative and qualitative. output data represent decision rules in the form of the statement "if ... then ...", which can be exact or approximate. based on these rules, decisions relating to the observed objects are made.

4. software systems for applying the rough set theory

in order to apply the rst to data sets, a large number of software systems which support the rst have been developed (abbas, 2016). this development can be attributed to the successful application of rough sets to data mining and knowledge discovery. for the purposes of this paper, two applications, namely rosetta and rose2, are presented. these applications enable the work with the data needed to support the decision-making of the security forces.

4.1. rosetta

rosetta was developed by the joint efforts of two groups of researchers, from the norwegian university of science and technology and the mathematical institute of the university of warsaw. the project leaders were jan komorowski and andrzej skowron (komorowski et al., 2002). the application design and the graphical user interface were developed by the norwegian group led by aleksander øhrn, while the rough set algorithms implemented in the software were further developed by the polish group. the rosetta system is a software package based on the concept of rough sets. the system includes a large number of algorithms for discretization, attribute reduction and data classification. it also generates if-then rules and allows splitting the data into sets for training, testing and validating the induced rules and patterns. all these features, in the version used here (1.4.41), are supported by a graphical user interface available for windows systems. the system is widely used in different areas.

4.2. rose2

rose2 is a software system that implements a large number of tools for working with rough sets. the system includes pre-processing (imputation of missing values and discretization), approximation of sets (determination of the upper and lower approximations and the boundary regions), calculation of the core, attribute reduction, generation of decision rules, classification and validation (predki et al., 1998). the basic version of the rose software system has been upgraded several times, adapted to various operating systems, and is now up to date as rose2. in the windows environment, this system, in the version used here (2.2), is graphically not as intuitive in presenting solutions as rosetta, but it contains different algorithms that can be applied to the reduction and the generation of decision rules. it was developed at the laboratory of intelligent decision support systems of the institute of computing science in poznań, poland.

5.
model application based on rough sets in security forces operations the support to the decision-making process in the security forces operations will be included in the proposed model. the phases of the model are as follows: 1) selecting the coa and defining the attribute values, 2) determining the attribute values of the selected coa and forming the decision table, 3) attribute reduction, and 4) generating decision-making rules. the model (figure 2) will be elaborated through the application of two software systems, and the results will be compared and analyzed. figure 2. decision-making support model in security forces operations 5.1. selection of coa and defining the attribute values in order to apply the rst and the proposed model, a source of data on security forces operations is required. the data of coa can be obtained in two ways: (1) from a previously conducted security force operation, and (2) from different simulated operations. the experiences from the conducted security forces operations are a good base for guiding the decision-making process. by analyzing the aforementioned operations, the data that will be used is found. project number 98-98 of the university of defence in belgrade rationalization of the military decision-making process, 2011 is especially significant for the data source. simulations of security forces operations contribute to the checking of selected coas and represent an experienced basis that leads to the improvement of the decision-making process. the university of defence simulation center simulates the operations of the jcats and karavidić & projović/decis. mak. appl. manag. eng. 1 (1) (2018) 97-120 104 janus programs and presents the data source that will be used in this paper. the coa data is entered into the model through the criteria attributes. in the evaluation process, it is necessary to assign certain values to each of these attributes. therefore, it will be necessary to specifically describe or define the values for every attribute. the application of rough sets does not exclusively require quantitative values, and the attributes in this section will be presented in a descriptive or linguistic way (table 1). however, for the needs of a more compact display and later for easier software data processing, the values of the attributes can be replaced by the corresponding numerical or letter substitutions. one of the ways to evaluate attributes is presented in the following text. table 1. overview of the attributes with values in security forces operations attribute description values the strength of our forces (a1) it represents the number of people and units through doctrinal principles for performing various security forces operations 3 – more than needed; 2 adequate (sufficient forces according to the doctrinal principles); 1 insufficient the strength of enemy forces (a2) in terms of the strength and sufficiency of the enemy, the location of the operation is examined. the number and sufficiency of the enemy are viewed through the environment in which the operation is carried out (e.g. the number of enemies in the urban environment or in the classical frontal operation is seen). 3 – very strong forces; 2 – adequate for the planned operation (sufficient forces and strength); 1 weaker enemy forces operations preparing time(a3) a time determination showing the total time available for planning the operation at all levels. in case of decrease the time for planning, harmful consequences can arise because the enemy's action will not be prevented. 
3 – sufficient time; 2 limited time, which requires greater and faster approximations; 1 – insufficient time combat environment (a4) it is considered through the prism of organization complexity and the limitation of the use of our various forces in different environments. 3 favorable unpopulated (unlimited use of our forces); 2 usual (poorly populated, the terrain is different); 1 complex (most often urban) our forces casualties (a5) the losses are perceived in accordance with the principles of conducting the operation. 3 – big losses; 2 –average losses; 1 – small losses civilian assessed based on the scope of the 3 – big losses; a multi-criteria decision-making (mcdm) model in the security forces operations ... 105 attribute description values casualties (a6) operation and the complexity of the environment in which the operation is performed. 2 –average losses; 1 – small losses maneuver (a7) skillful use of movement and fire in order to bring our own forces into a more favorable position in relation to the enemy. the success of the maneuver realization greatly contributes to the realization of the operation’s goal. 3 – completely successful; 2 partially successful; 1 unsuccessful combat support (a8) reflected in the sufficiency of combat support resources in different environment. it represents the fire and operational support of our forces that conduct the operation. 2 adequate or sufficient; 1 -inadequate or insufficient protection of our forces (a9) includes various activities that are planned and undertaken in order to reduce the ability of detecting our own forces and preventing or reduce the effects of the enemy's actions. 2 – sufficient; 1 -insufficient sustainability of our forces (a10) for efficiency and autonomy of forces during their use, it combines various activities, measures and procedures of logistical support, personnel and financial security in operations. 2 – favorable; 1 unfavorable simplicity of action (a11) it implies the complexity of the conducted operation. greater complexity in accordance with doctrinal principles leads to a more difficult achievement of the planned goal. it is related to the success of the maneuver. 3 – simple; 2 partly complex; 1 – fully complex morale (a12) it implies the moral-psychological state and the determination to carry out the task. it refers to our forces that participate in the operation, but also to the condition and readiness of civilian structures to accept the consequences of the operation. the extraordinary significance of the moral aspect is manifested in unforeseen situations when it can bring a dominance over the enemy. 3 – favorable; 2 partly favorable; 1 unfavorable intelligence system (a13) collecting, processing and using intelligence data is inseparably linked to the success of the operation. quality work of the services will contribute to more precise data and reduce the uncertainty in the decision-making process 2 – adequate; 1 -inadequate command and control – c2 (a14) it implies the expertise and experience of persons who manage the operation, their organization, operability, efficiency and elasticity in conducting the operation. it is 3 – high level; 2 – adequate; 1 insufficient karavidić & projović/decis. mak. appl. manag. eng. 1 (1) (2018) 97-120 106 attribute description values related to the speed of information transmission and timely response to emerging situations. 
coordination with civil structures (a15) cooperation with civil administration authorities in the operations zone 2 – adequate; 1 inadequate decision attribute success of the operation (d) the result of the operation 2 – successful with minor or greater losses 1 unsuccessful 5.2. assigning values to the attributes of the selected coa and forming a decision table the decision table is a data table that distinguishes two attribute classes condition attributes (a1, a2 ... a15) and decision (action) attributes (d).table 2 shows the overview of the coa and attributes. in each row, one coa is described, and in each column, one attribute is described. the records in the table are the values of the attribute. attribute values can be expressed linguistically, but due to a more compact display, they will be replaced by numerical substitutions. in this way, each row can provide a piece of information on a particular coa in the operation. table 2. decision-making table coa a 1 a 2 a 3 a 4 a 5 a 6 a 7 a 8 a 9 a 10 a 11 a 12 a 13 a 14 a 15 d 1. 3 3 3 3 2 2 3 2 2 2 3 2 2 3 2 2 2. 3 1 2 1 2 3 2 2 2 2 2 2 1 2 1 2 3. 1 2 1 1 3 3 1 2 1 1 2 2 2 2 2 1 4. 2 2 2 2 2 2 1 2 2 2 1 1 1 1 1 1 5. 3 2 2 2 2 2 3 2 1 2 2 2 2 3 1 2 6. 3 3 2 2 1 1 3 2 2 2 2 2 2 3 2 2 7. 1 2 1 2 2 1 1 2 1 2 2 2 2 2 2 1 8. 3 1 3 1 2 3 3 2 2 2 3 2 2 3 1 2 9. 1 2 1 3 2 2 1 1 1 1 3 1 1 1 1 1 10. 3 1 3 1 2 2 2 2 2 2 2 2 2 3 2 2 11. 2 2 3 2 3 2 3 2 2 2 2 2 2 2 1 2 12. 2 1 2 1 2 2 3 1 2 1 2 2 2 3 1 2 13. 3 1 3 2 1 2 3 2 2 2 2 2 2 2 2 2 14. 2 3 1 2 1 2 2 2 2 2 2 2 2 2 2 1 15. 3 1 2 2 1 2 3 2 1 2 2 2 2 3 1 2 16. 3 2 3 1 2 3 2 2 2 2 2 2 1 1 1 2 17. 3 3 3 2 3 2 3 2 2 2 2 2 2 2 2 2 18. 2 2 1 3 1 2 1 2 2 2 3 2 2 2 2 1 19. 1 2 1 2 3 2 1 2 2 2 3 2 2 2 2 1 20. 3 1 3 3 2 2 3 2 1 2 3 2 2 1 2 2 21. 2 2 1 2 2 2 2 2 2 1 1 2 2 3 2 1 22. 2 2 1 3 2 1 1 1 2 2 2 2 2 2 1 1 a multi-criteria decision-making (mcdm) model in the security forces operations ... 107 coa a 1 a 2 a 3 a 4 a 5 a 6 a 7 a 8 a 9 a 10 a 11 a 12 a 13 a 14 a 15 d 23. 2 2 3 2 2 2 3 2 2 1 1 2 2 3 2 2 24. 2 2 3 2 1 1 3 1 2 2 2 2 2 2 1 2 25. 1 1 3 2 1 1 3 1 2 2 2 2 2 2 1 2 26. 2 3 1 2 1 2 2 2 2 2 2 2 2 2 2 1 27. 3 1 2 2 1 2 3 2 1 2 2 2 2 3 1 2 28. 3 2 3 1 2 3 2 2 2 2 2 2 1 1 1 2 29. 3 3 3 2 3 2 3 2 2 2 2 2 2 2 2 2 30. 2 2 1 3 1 2 1 2 2 2 3 2 2 2 2 1 31. 1 2 1 2 3 2 1 2 2 2 3 2 2 2 2 1 32. 3 1 3 3 2 2 3 2 1 2 3 2 2 1 2 2 33. 2 2 1 2 2 2 2 2 2 1 1 2 2 3 2 1 34. 2 2 1 3 2 1 1 1 2 2 2 2 2 2 1 1 35. 2 2 3 2 2 2 3 2 2 1 1 2 2 3 2 2 36. 2 2 3 2 1 1 3 1 2 2 2 2 2 2 1 2 37. 1 1 3 2 1 1 3 1 2 2 2 2 2 2 1 1 in figures 3 and 4 screen review decision table in software systems rosetta and rose2 can be seen. in software system rosetta, the names of the attributes are given linguistically, while in rose2 they are written in symbols. figure 3. decision table in software system rosetta figure 4. decision table in software system rose2 karavidić & projović/decis. mak. appl. manag. eng. 1 (1) (2018) 97-120 108 the software system rose2 can be used for determining the upper and lower approximation of the sets ''coa that was successful (i.e. the security forces operation is successful according to the selected coa)'' and sets ''coa that was unsuccessful (i.e. security forces operation was unsuccessful according to the selected coa'' (figure 5). the software system displays the number of objects by the decision attribute, upper and lower approximation and accuracy approximation coefficient. the software system rosetta does not have such possibilities. figure 5. 
determining the upper and lower approximation and the accuracy of approximation coefficient in the software system rose2

the accuracy of approximation coefficient α_I(X) is shown in the rose2 software system as "accuracy". it can be seen that α(successful) = 0,875 and α(unsuccessful) = 0,9130. based on equation (15), when α_I(X) → 1, equation (3) gives BR_I(X) → ∅, which means that the upper and lower approximations approach each other; for α_I(X) = 1 it follows that BR_I(X) = ∅. the above implies that the combinations of attributes in the coas are unique, i.e. there are no identical condition attributes leading to different decision attributes. in that case, the set is crisp. by increasing the number of coas, the given sets would increase their degree of vagueness and the sets would become more "rough". the coefficient of approximation accuracy would then be smaller, and the available knowledge would be more difficult to classify, but this would not affect the capabilities of these software systems. the ability to work with rules of reduced consistency is a fundamental advantage of the rst when working with incomplete and imprecise data.

5.3. attribute reduction

the next step is to assemble a minimal subset of independent attributes, i.e. a reduct. such reducts guarantee the same quality of classification as the whole attribute set, and the attributes they have in common form the attribute core. attribute reduction implies a decrease in the number of attributes that have to be considered in the decision-making process. the aim is to identify those attributes which, according to the requirements of the decision-maker, significantly influence the decision-making process. attribute reduction is applied only when it does not disturb the quality of the approximation. finding the reducts will be examined through the rosetta and rose2 software systems by using the most important reduction algorithms they offer.

5.3.1. attribute reduction with the software system rosetta

rosetta offers various reducers, or reduction algorithms, which can be applied to the data. some of the reducers are implemented as variants of the original form of an algorithm, while others are customised and refined versions of existing algorithms adapted for use in the software system. the refined reducers developed for the needs of the rosetta software system carry the prefix rses.

johnson reducer is a variant of the simple "greedy" algorithm (johnson's algorithm) used for calculating only one short reduct. the algorithm tends to find a prime implicant of minimum length (johnson, 1974). it always selects the attribute that appears most frequently in the discernibility function, or in a row of the discernibility matrix, and continues until a reduct is obtained. this algorithm treats the attribute that appears most often as the most significant one; even though this is not true in all cases, an optimal solution is usually found (abbas, 2016). the result of the application on the decision-making table is shown in figure 6.

figure 6. rosetta reduction johnson reducer

rses exhaustive reducer calculates the reducts by brute computational force, without approximations, comparing all the given combinations of attributes with one another. the output gives multiple reducts that significantly affect the decision attribute (dobrilovic et al., 2012; romański, 1988). a simplified sketch contrasting the greedy and the exhaustive search strategies is given below.
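the code below is not the rosetta implementation; it is our simplified reconstruction of the two search ideas on a handful of invented rows shaped like table 2 (four hypothetical condition attributes and one decision). a discernibility structure records which attributes distinguish every pair of rows with different decisions, a johnson-style greedy pass repeatedly picks the attribute that resolves the largest number of still-uncovered pairs, and an exhaustive pass checks all attribute subsets of increasing size.

```python
# simplified sketch (not the rosetta code): johnson-style greedy reduct search vs.
# exhaustive search over a discernibility structure. rows are hypothetical and only
# mimic the shape of the decision table (condition attributes a1..a4, decision d).
from itertools import combinations

rows = [  # (conditions, decision) -- illustrative values only
    ((3, 3, 3, 2), 2),
    ((1, 2, 1, 3), 1),
    ((3, 1, 2, 2), 2),
    ((2, 2, 1, 1), 1),
    ((2, 2, 3, 2), 2),
]
n_attr = 4

def discernibility_clauses(rows):
    """for every pair of rows with different decisions, record the attributes
    that distinguish them; any reduct must 'hit' every such clause."""
    clauses = []
    for (c1, d1), (c2, d2) in combinations(rows, 2):
        if d1 != d2:
            diff = frozenset(i for i in range(n_attr) if c1[i] != c2[i])
            if diff:
                clauses.append(diff)
    return clauses

def johnson_greedy(clauses):
    """repeatedly pick the attribute appearing in the most uncovered clauses."""
    reduct, uncovered = set(), list(clauses)
    while uncovered:
        counts = {}
        for clause in uncovered:
            for a in clause:
                counts[a] = counts.get(a, 0) + 1
        best = max(counts, key=counts.get)
        reduct.add(best)
        uncovered = [clause for clause in uncovered if best not in clause]
    return reduct

def exhaustive(clauses):
    """smallest attribute subsets that cover every clause (brute force)."""
    for k in range(1, n_attr + 1):
        hits = [set(s) for s in combinations(range(n_attr), k)
                if all(set(s) & clause for clause in clauses)]
        if hits:
            return hits
    return []

clauses = discernibility_clauses(rows)
print("greedy reduct (attribute indices):", johnson_greedy(clauses))
print("all minimal reducts              :", exhaustive(clauses))
```

the greedy pass returns a single short reduct, whereas the exhaustive pass returns every minimal reduct, mirroring the difference between the johnson and the rses exhaustive reducers.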
the result of the application on the decision-making table is shown in figure 7. karavidić & projović/decis. mak. appl. manag. eng. 1 (1) (2018) 97-120 110 figure 7. rosetta reduction rses exhaustive reducer rses johnson reducer is an advanced version of a simple johnson algorithm adapted to the rosetta software system (li, 2014). the result of the application on the decision-making table is shown in figure 8. figure 8. rosetta reduction rses johnson reducer rses genetic reducer implements a variant of the genetic algorithm (jaddi & abdullah, 2013; wroblewski, 1995) to search for reductions until the search area is exhausted, i.e. until the maximum number of reductions is noticed. as the aforementioned, the reducer is adapted to the rosetta software system and it provides various options for selecting the parameters depending on the search speed requirements and the coverage of the reduction. the result of the application on the decision-making table is shown in figure 9. a multi-criteria decision-making (mcdm) model in the security forces operations ... 111 figure 9. rosetta reduction rses genetic reducer 5.3.2. attribute reduction with software system rose2 the rose2 software system also offers multiple reducers based on different algorithms. the lattice search reducer attempts to reduce search space by extracting a part that has no potential to include reduction of including a reduct (grabowski, 2016; prędki & wilk, 1999). the result of the application on the decision-making table is shown in figure 10. figure 10. rose2reduction lattice search discernibility matrix reductor is a more computer-efficient algorithm for generating reductions based on an open matrix (skowron & rauszer, 1992). the result of the application on the decision-making table is shown in figure 11. karavidić & projović/decis. mak. appl. manag. eng. 1 (1) (2018) 97-120 112 figure 11. rose2reduction discernibility matrix heuristic search reducer implements a strategy based on adding attributes to the core. it determines approximately the reduction value when it is not possible to accurately determine other algorithms. because of this characteristic, heuristic search reducer is significant when other methods fail (liang et al., 2014).the result of the application on the decision-making table is shown in figure 12. figure 12. rose2reduction heuristic search 5.3.3. review of reduced attributes it is important to highlight that, due to the essence of the decision-making support process, finding a shorter coordinated core is of crucial importance. such a need arises from the demand that the time for considering attribute conditions be a multi-criteria decision-making (mcdm) model in the security forces operations ... 113 shortened because analyzing every additional attribute takes additional time and that is always a limiting factor. some reducers offer only shorter cores while others offer cores of different lengths sorted by the quality of reduction. because of this, only the cores of the shortest length (in our case, two attributes) and the highest quality reduction will be considered. comparison of various components of rosetta and rose2 software systems and attributes obtained by reduction are given in table 3. table 3. results of attributes’ reduction software system reductor attributes obtained by reduction 1. reduction 2. 
reduction rosetta johnson reducer а1, а3 rses exhaustive reducer а1, а3 а1, а7 rses johnson reducer а3, а1 rses genetic reducer а1, а3 а1, а7 rose2 lattice search а1,а7 а1, а3 discernibility matrix а1, а7 а1, а3 heruistic search а1, а3 а1, а7 it can be seen from the previous table that different reducers give very similar results. the mild differences are the result of the applied algorithms, their way of attribute reduction and limitations in the reduction process, but also of the number of coas being considered. with the increase in the number of coas, it is expected that there would be equalization of different algorithm reduction results. looking at the results of all the obtained reductions, it can be concluded that there is no unique combination of two attributes around which the offered algorithms are completely "compatible". the most compatible attribute combination is a1 and a3 (the strength of our forces and operations preparing time). however, it is noticeable that three attributes are repeated in the results of all reducers both on the first and the second reduction. therefore, the final reduction cannot be performed by using the shortest combination of two attributes. instead, three attributes will be used: а1, а3, а7 that is, the strength of our forces, operations preparing time and maneuver. these attributes essentially represent the core of the attributes required for decision-making. other attributes are rejected because their values will not have a significant effect on classifying coa and generating the decision-making rules. 5.4. generating decision-making rules the obtained attributes are sufficient to form a reduced decision-making table. the rosetta software system allows the consideration of the harmonized reduced decision-making table through the manual reducer and generating decision-making rules for the specified attributes (figure 13). also, each of the aforementioned reducers generates its decision-making tables. however, the above will be used due to a more comprehensive view of the selected condition attributes. karavidić & projović/decis. mak. appl. manag. eng. 1 (1) (2018) 97-120 114 figure 13. rosetta –reduced decision-making table it can be noticed that besides generating complete decision-making algorithms, the rosetta software system also generates various other data related to certain probability properties. the most important characteristics for observation and further consideration of the attributes are support, strength, certainty and coverage (pawlak, 2002). these factors have various names in different software systems, and therefore, direct translations can be diverse, but for the purposes of this work, previously given property names will be kept. the support factor represents the number of coa with all identical attributes. in figure 13, it is presented as the rhs support. the software system also offers the lhs support feature, which represents the number of coas with equal attribute conditions. this is less important for further consideration. by reducing the consistency of the decision-making rules, differences between the two properties indicated would be made. the strength factor represents the participation of the coa determined by the observed attributes in the total number of the monitored coas and the sum of all must be 100%. basically, it represents the support factor in percentages compared to the total number of coas considered. it gives an important indicator of the coa towards which should be strived. 
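the effect of keeping only the core attributes can also be sketched in a few lines of python. the fragment below is illustrative and does not use the paper's full table 2: it projects a handful of hypothetical coa rows onto (a1, a3, a7), merges identical condition patterns into if-then rules, and attaches the simple frequency statistics (strength, certainty, coverage) that are discussed in the following paragraphs.

```python
# illustrative sketch: project a decision table onto the core attributes
# (a1, a3, a7) and merge identical condition patterns into if-then rules with
# simple frequency statistics. the rows below are a hypothetical subset, not
# the paper's full 37-coa table.
from collections import defaultdict

# (a1, a3, a7, decision) -- 3/2/1 condition levels, decision 2=successful, 1=unsuccessful
rows = [
    (3, 3, 3, 2), (3, 2, 2, 2), (1, 1, 1, 1), (2, 2, 1, 1),
    (3, 2, 3, 2), (3, 3, 3, 2), (1, 1, 1, 1), (2, 3, 3, 2),
]

patterns = defaultdict(lambda: defaultdict(int))   # condition pattern -> decision -> count
decision_totals = defaultdict(int)
for a1, a3, a7, d in rows:
    patterns[(a1, a3, a7)][d] += 1
    decision_totals[d] += 1

total = len(rows)
for cond, decisions in patterns.items():
    lhs_support = sum(decisions.values())          # coas sharing this condition pattern
    for d, rhs_support in decisions.items():
        strength = rhs_support / total             # share of all observed coas
        certainty = rhs_support / lhs_support      # consistency of the rule
        coverage = rhs_support / decision_totals[d]  # share of coas with this decision
        print(f"if (a1,a3,a7)={cond} then d={d} "
              f"[strength {strength:.2f}, certainty {certainty:.2f}, coverage {coverage:.2f}]")
```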
it is a significant statistic prediction indicator if the values are higher. the strength factor is most often calculated from data, but it can be also obtained by estimation (pawlak, 2002). it is obtained by estimation when an expert in a particular field estimates that the appropriate combination of attributes in coa is more significant than a simple percentage participation in the sum of all coas. in figure 13, it is presented in the lhs coverage column and it is derived from the existing table data. the certainty factor is at a high level, due to different combinations of condition attributes in a reduced decision-making table. this feature represents practically the participation of the support factor of a particular condition attribute combination in the total support of that condition attribute combination. it gives knowledge on certainty of the observed coа. the value will decrease if there are identical condition attributes with different decision attributes. because of its importance, this property of probability leads us to consider the coas that have a higher value of certainty, i.e. closer to the number 1.00. in this sense, the certainty factor can be identified with the previously defined consistency factor c (δ (h)) and it should be the first property and the most important factor to be considered in the analysis of the further offered algorithms. the coa with a smaller consistency factor will further focus a multi-criteria decision-making (mcdm) model in the security forces operations ... 115 consideration of the generated rules. this property is shown in figure 13 in the rhs accuracy column. the coverage factor provides significant information about the participation of a particular value of the decision attribute. it implies percentage of one attribute combination in the given decision attribute. the sum of all factor values must be 100% by one value of the decision attribute. it is particularly emphasized in considering a single decision attribute in a large number of coas. this is shown in figure 13 in rhs coverage column. further reduced decision-making table can be presented by the following decision-making algorithms and prominent probability properties (table 4). the generated decision rules for c(δ(х))=1 were taken into account. table 4. rosetta -decision-making algorithms for c(δ(х))=1 if then strength factor (%) coverage factor (%) condition attributes decision attribute strength of our forces operations preparing time maneuver success of the operation more than needed sufficient completely successful successful 18,9 31,8 more than needed limited partially successful successful 2,7 4,5 insufficient insufficient unsuccessful unsuccessful 13,5 33,3 adequate limited unsuccessful unsuccessful 2,7 6,6 more than needed limited completely successful successful 13,5 22,7 more than needed sufficient partially successful successful 8,1 13,6 adequate sufficient completely successful successful 13,5 22,7 adequate insufficient partially successful unsuccessful 10,8 26,6 adequate insufficient unsuccessful unsuccessful 10,8 26,6 the mentioned prominent properties of probability in the decision-making algorithm are directed to the specific if-then rules, which, due to the above properties, further emphasize their significance. the rose2 software system offers a different approach to generating decision rules. 
it uses a modified lem2 (modlem) algorithm that recognizes extreme differences in rules and separates the most positive and most negative attributes from the impact on decision attributes. all the offered variants of this algorithm have a "greedy" approach and give short decision-making rules. for the purposes of this paper, the rule generator will be considered with the extended minimum coverage as can be seen in figure 14. karavidić & projović/decis. mak. appl. manag. eng. 1 (1) (2018) 97-120 116 figure 14. rose2–decision rules the obtained data can be used. however, due to the lack of a certain number of probabilities and the combination of condition attributes, they are less important in the further decision-making process than the results of the rosetta program. they represent a shortened lead with the described coverage factor, which should be sought in the further decision-making process. the software system also directs to the decision rules with consistency factor c (δ (h)) = 1. in accordance with the possibilities of the rst, it shows the rules for which c (δ (h)) <1, but do not give a precise value. those rules are called approximate rules. the decision-making algorithms, derived from the software system rose2 for c(δ(х)) = 1, are presented in table 5. table 5. rose2 -decision-making algorithms for c(δ(х))=1 if then coverage factor (%) condition attributes decision attribute strength of our forces operations preparing time maneuver success of the operation insufficient unsuccessful 86,6 unsuccessful unsuccessful 66,6 more than needed or adequate sufficient or limited completely or partially successful successful 95 rose2 directs with the coverage factor. in this way, the shorter coverage of the rules in percentages as the only property of the probability of the given rule, as given by the rose2 software system, is not sufficiently strong to lead to the desired decision attribute. however, even this coverage of the rule can be significant in the a multi-criteria decision-making (mcdm) model in the security forces operations ... 117 decision-making process where it gives certain knowledge about processes that are shaped and at least partially directs the decision-makers. 6. discussion of results the obtained attribute core is essential for the success of the operation, but also for other condition attributes. the influence of the attribute core on the success of the operation can be considered through other condition attributes (us army, 2015). for example, operations preparing time (core attribute а3) affects the quality of planning all elements of the operation. it also affects combat support (а8), protection of our forces (а9) and sustainability of our forces (а10). time also has an effect on all activities that completely or partially precede performing of the operation. some of those activities are intelligence system (а13) and coordination with civil structures (а15). within the sufficient time frame, shortcomings in command and control c2 (a14) can be compensated. additionally, our forces casualties (a5) can be reduced through greater preparation of the protection of our forces (a9). similarly, other core attributes dominantly affect other condition attributes. the strength of our forces (core attribute a1) can compensate for different negativities in other attributes. on the other hand, there is a certain feedback between all attributes. moreover, there is a mutual influence which is impossible to fully comprehend due to the stated complexity of the environment. 
such feedback is also present between the core attributes, but less significant than with other attributes. an example for that is the influence of operations preparing time (core attribute а3) on maneuver (core attribute а7). in practice it can have a positive influence, but not necessarily. by using this decision-making support model, the complexity of the mutual influence of all condition attributes can be partially overcome. this is one of its biggest advantages. the obtained decision algorithm, especially the one from the rosetta software system, directs and manages the authorities that plan the coa of the security forces operations to the rules that bring success in operations in a complex environment (gordic et al., 2013). they also provide information on combinations of attributes that will lead to unsuccessful operation. guided by these rules in different situations, time spent on certain options in entire planning and decision-making process is reduced. it is a necessary time-saving. the application of the decision-making support system based on the rst enables an additional source of information to the decision-maker and the persons who take part in the entire decision-making process. thus, the purpose of such a system is fulfilled. 7. conclusion the rst in the decision-making support model uses entirely internal knowledge, unlike other methods whose application requires additional assumption models or some form of preprocessing. the internal knowledge represents the existing operational data, and there is no need to rely on modeling assumptions. the advantage of the decision-making model based on the rst in the decisionmaking process is the ability to use qualitative-quantitative data, as well as the ifthen decision-making algorithms. these algorithms can be applied to the whole decision-making process by directing the decision-maker in every moment of the process, and not just at the moment of selecting a coa. karavidić & projović/decis. mak. appl. manag. eng. 1 (1) (2018) 97-120 118 using the proposed decision-making support model makes it possible to reach extremely valuable indicators in a rather simple way, which can help in the decisionmaking process. the paper presents one method of use; however, due to the complexity of the environment in which security forces operations are planned and implemented, it is possible to apply the rough set concept to lower levels the sublevels of these attributes. simultaneous application of the rough set concept to lower and higher levels of attributes in security forces operations, complemented by classifying and/or clustering at lower levels, can be a challenge for future work. in this way, the support for decision-making in security forces operations in the modern security environment would be raised to a higher level. acknowledgements the work reported on in this paper is a part of the investigation in the research projects va-dh/2/18-20 supported by the university of defence in belgrade and muo-in supported by the university of defence in belgrade, ministry of defence, republic of serbia and ministry of education, science and technological development, republic of serbia. this support is gratefully acknowledged. references abbas, z., & burney, a. (2016). a survey of software packages used for rough set analysis. journal of computer and communications, 4 (9), 10-18. boričić, b. r., & konjikušić, s. (2004). logika preferencija na grubim i rasplinutim skupovima. economic annals, 44 (160), 131-146. božanić, d. i., pamučar, d. 
s., & karović, s. m. (2016). use of the fuzzy ahp-mabac hybrid model in ranking potential locations for preparing laying-up positions. military technical courier, 64 (3), 705-729. burney, a., & abbas, z. (2015). applications of rough sets in health sciences and disease diagnosis. recent researches in applied computer science, 8 (3), 153-161. chen, h., li, t., luo, c., horng, s. j., & wang, g. (2015). a decision-theoretic rough set approach for dynamic data mining. ieee transactions on fuzzy systems, 23 (6), 1958-1970. chowdhary, c. l., & acharjya, d. p. (2016). a hybrid scheme for breast cancer detection using intuitionistic fuzzy rough set technique. international journal of healthcare information systems and informatics (ijhisi), 11 (2), 38-61. čupić, m., & suknović, m. (2010). teorija odlučivanja, beograd, fon, 227-236. deshpande, m., & bajaj, p. (2017). performance improvement of traffic flow prediction model using combination of support vector machine and rough set. international journal of computer applications, 163 (2), 31-35. dobrilovic, d., brtka, v., berkovic, i., & odadzic, b. (2012). evaluation of the virtual network laboratory exercises using a method based on the rough set theory. computer applications in engineering education, 20(1), 29-37. a multi-criteria decision-making (mcdm) model in the security forces operations ... 119 durairaj, m., & sathyavathi, t. (2013). applying rough set theory for medical informatics data analysis. isroset-international journal of scientific research in computer science and engineering, 1, 1-8. gigović, l., pamučar, d., bajić, z., & milićević, m. (2016). the combination of expert judgment and gis-mairca analysis for the selection of sites for ammunition depots. sustainability, 8(4), 372, 1-25. gordic, m., slavkovic, r., & talijan, m. (2013). a conceptual model of the state security system using the modal experiment. “carol i” national defence university publishing house, 48(3), 58-67. grabowski, a. (2016). lattice theory for rough sets–a case study with mizar. fundamenta informaticae, 147 (2-3), 223-240. greco, s., matarazzo, b., & slowinski, r. (2002). rough sets methodology for sorting problems in presence of multiple attributes and criteria. european journal of operational research, 138 (2), 247-259. jaddi, n. s., & abdullah, s. (2013). hybrid of genetic algorithm and great deluge algorithm for rough set attribute reduction. turkish journal of electrical engineering & computer sciences, 21 (6), 1737-1750. ji, z., sun, q., xia, y., chen, q., xia, d., & feng, d. (2012). generalized rough fuzzy cmeans algorithm for brain mr image segmentation. computer methods and programs in biomedicine, 108 (2), 644-655. jia, x., shang, l., ji, y., & li, w. (2007). an incremental updating algorithm for core computing in dominance-based rough set model. in international workshop on rough sets, fuzzy sets, data mining, and granular-soft computing. springer berlin heidelberg. jiang, f., zhou, k., deng, h., li, x., & zhong, y. (2009). an optimized model for blasting parameters in underground mines' deep-hole caving based on rough set and artificial neural network. in computational intelligence and design, 2009. iscid'09. second international symposium on ieee, 1, 459-462. johnson, d. s. (1974). approximation algorithms for combinatorial problems. journal of computer and system sciences, 9(3), 256-278. komorowski, j., øhrn, a. and skowron, a. (2002) case studies: public domain, multiple mining tasks systems: rosetta rough sets. in: zyt, j., klosgen, w. 
and zytkow, j.m., eds., handbook of data mining and knowledge discovery, oxford university press inc., oxford, 554-559. kuburić, m., ćirović, g., & kapović, z. (2012). estimation of bridges through implementation of rough sets theory. technical gazette, 19(4), 781-793. li, x. (2014). attribute selection methods in rough set theory. master's projects 352. san josé state university. liang, j., wang, f., dang, c., & qian, y. (2014). a group incremental approach to feature selection applying rough set technique. ieee transactions on knowledge and data engineering, 26 (2), 294-308. karavidić & projović/decis. mak. appl. manag. eng. 1 (1) (2018) 97-120 120 pamučar, d., božanić, d., & đorović, b. (2011). modelling of the fuzzy logical system for offering support in making decisions within the engineering units of the serbian army. international journal of physical sciences, 6 (3), 592-609. pawlak, z. (1982). rough sets. international journal of parallel programming, 11 (5), 341-356. pawlak, z. (2002). rough sets and intelligent data analysis. information sciences, 147 (1), 1-12. prędki, b. and wilk, s. (1999) rough set based data exploration using rose system. 11th international symposium of foundations of intelligent systems, warsaw, 8-11, 172-180. predki, b., słowiński, r., stefanowski, j., susmaga, r., & wilk, s. (1998). rosesoftware implementation of the rough set theory. in international conference on rough sets and current trends in computing. springer, berlin, heidelberg. romański, s. (1988). operations on families of sets for exhaustive search, given a monotonic function. in proceedings of the third international conference on data and knowledge bases: improving usability and responsiveness, 310-322. shen, k. y., sakai, h., & tzeng, g. h. (2017). stable rules evaluation for a rough-setbased bipolar model: a preliminary study for credit loan evaluation. in international joint conference on rough sets. springer, cham. shen, l., & chen, s. (2013). research of customer classification based on rough set using rosetta software. in proceedings of the 2012 international conference on communication, electronics and automation engineering. springer berlin heidelberg. skowron, a., & rauszer c. (1992). the discernibility matrices and functions in information systems in: slowinski r. intelligent decision support. handbook of applications and advances of the rough sets theory. kluwer academic publishers. slavkovic, r., talijan, m., & jelic, m. (2012). operatics in the system of defence sciences (military sciences). “carol i” national defence university publishing house, 45(4), 88-100. slavkovic, r., talijan, m., & jelic, m. (2013). relationship between theory and doctrine of operational art. security and defence quarterly, 1(1), 54-75. stokić, e., brtka, v., & srdić, b. (2010). the synthesis of the rough set model for the better applicability of sagittal abdominal diameter in identifying high risk patients. computers in biology and medicine, 40 (9), 786-790. suknović, m., & delibašić, b. (2010). poslovna inteligencija i sistemi za podršku odlučivanju. fon, beograd. tamai, s. (2009). tools for operational planning functional area service: what is this. nrdc-ita magazine. us army. (2015). fm 6-0 commander and staff organization and operations, washington. wroblewski, j. (1995). finding minimal reducts using genetic algorithms. in proccedings of the second annual join conference on infromation science, 2, 186-189. 
plane thermoelastic waves in infinite half-space caused decision making: applications in management and engineering vol. 3, issue 2, 2020, pp. 97-118 issn: 2560-6018 eissn: 2620-0104 doi:_ https://doi.org/10.31181/dmame2003097v * corresponding author. e-mail addresses: m.j.vilela-ibarra@rgu.ac.uk (m. vilela), g.f.oluyemi@rgu.ac.uk (g. oluyemi), a.petrovski@rgu.ac.uk (a. petrovski) a holistic approach to assessment of value of information (voi) with fuzzy data and decision criteria martin vilela*1, gbenga oluyemi 1 and andrei petrovski 2 1 school of engineering, robert gordon university, aberdeen, united kingdom 2 school of computing, robert gordon university, aberdeen, united kingdom received: 15 august 2020; accepted: 28 september 2020; available online: 30 september 2020. original scientific paper abstract: classical decision and value of information theories have been applied in the oil and gas industry from the 1960s with partial success. in this research, we identify that the classical theory of value of information has weaknesses related with optimal data acquisition selection, data fuzziness and fuzzy decision criteria and we propose a modification in the theory to fill the gaps found. the research presented in this paper integrates theories and techniques from statistical analysis and artificial intelligence to develop a more coherent, robust and complete methodology for assessing the value of acquiring new information in the context of the oil and gas industry. the proposed methodology is applied to a case study describing a value of information assessment in an oil field where two alternatives for data acquisition are discussed. it is shown that: i) the technique of design of experiments provides a full identification of the input parameters affecting the value of the project and allows a proper selection of the data acquisition actions, ii) when the fuzziness of the data is included in the assessment, the value of the data decreases compared with the case where data are assumed to be crisp; this result means that the decision concerning the value of acquiring new data depends on whether the fuzzy nature of the data is included in the assessment and on the difference between the project value with and without data acquisition, iii) the fuzzy inference system developed for this case study successfully follows the logic of the decision-maker and results in a straightforward system to aggregate decision criteria. key words: value of information, fuzzy logic, design of experiments, uncertainty, decision making. mailto:m.j.vilela-ibarra@rgu.ac.uk mailto:g.f.oluyemi@rgu.ac.uk mailto:a.petrovski@rgu.ac.uk vilela et al./decis. mak. appl. manag. eng. 3 (2) (2020) 97-118 98 1. introduction the classical methodology for the value of information (voi) assessment has been used in the oil and gas industry since the 1960s, even though it is only recently that more applications have been published. it is commonly acknowledged that, due to a large number of data acquisition actions and the capital investment associated with it, the oil and gas industry is an ideal domain for developing and applying the voi assessments. the current methodology for the voi has several weaknesses for its applicability in oil and gas projects, and the objective of this research is to present a complete theory for voi that overcomes those weaknesses. 
the weaknesses found in the current voi theory are the following: 1) typically, the classical approach for voi assessment is carried out when it has been identified that the value of the project depends on an uncertain input variable that may be better defined if a specific piece of data is acquired. this approach lacks a complete analysis of the project uncertainties and the impact that the different inputs and their interactions have on the project’s value. this procedure to assess the value of acquiring data can limit the opportunities to improve the project’s value. 2) the classical approach to voi does not provide an integrated assessment of the impact that a specific data-gathering activity may have on the uncertainty of more than one variable. 3) voi does not consider that the data to be acquired may carry uncertainties that are due not only to randomness but also to fuzziness. 4) although the utility value is a well-known concept, most of the times, it is not used in voi assessments. 5) the criteria used by decision-makers for making decisions (e.g. to reject a project or to accept a data acquisition proposal) are fuzzy. however, the results from the classical voi assessment are crisp numbers; the handling of this dichotomy requires different tools from the ones used in the classical approach for voi. the aim of this research is to address the gaps identified in the classical methodology for the voi by integrating three existing techniques from other domains. firstly, the research identifies that the design of experiments (doe) approach can be used in the voi for providing a holistic assessment of the complete set of uncertain parameters, selecting the ones that have the most impact on the value of the project, and supporting the selection of the data acquisition actions for evaluation. secondly, the fuzziness of the data is captured through membership functions, and the expected utility value of each financial parameter is estimated using the probability of the states conditioned to the membership functions (in the classical methodology, this is conditioned to crisp values of the data). thirdly, a fuzzy inference system is developed for making the voi evaluation, with the human decision-making logic integrated into the assessment process, and several financial parameters aggregated into one. a case study, taken from the oil and gas industry, is discussed to show a successful application of the proposed methodology. 2. literature review value of information is a theory for deciding whether it is worthwhile to acquire information in the frame of a project’s value; this will happen when new data is used a holistic approach to assessment of value of information (voi) with fuzzy data and decision... 99 to change a decision that would be made differently without that information and when the value of the project increases after data is acquired. voi theory was developed by schlaifer (1959) and later developed further by grayson (1960), raiffa and schlaifer (1961), newendorp (1967) and raiffa (1968) in the context of business administration. one of the first references of voi in the oil and gas industry is grayson’s (1960) application of voi to uncertain drilling decisions. newendorp (1967) discusses a voi problem including the risk attitude of the decision-maker described through the use of the exponential utility function; this same author (newendorp, 1972) reviews in great detail the bayes’ theorem and its application for voi assessment. 
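before the literature is reviewed further, a small numerical sketch of the classical (crisp) voi calculation may help to make the gap analysis above concrete. all numbers below (prior probabilities, signal reliabilities and project values) are invented for illustration and are not taken from the case study discussed later in the paper.

```python
# illustrative sketch of the classical (crisp) voi calculation: expected value
# of the best decision with information minus the expected value of the best
# decision without it. all numbers are hypothetical.
states = ["good reservoir", "poor reservoir"]
prior = {"good reservoir": 0.4, "poor reservoir": 0.6}

# project value (e.g. npv in $mm) for each decision in each state
value = {
    ("develop", "good reservoir"): 120.0, ("develop", "poor reservoir"): -60.0,
    ("walk away", "good reservoir"): 0.0,  ("walk away", "poor reservoir"): 0.0,
}
decisions = ["develop", "walk away"]

def best_expected(prob):
    """expected value of the best decision under a given state distribution."""
    return max(sum(prob[s] * value[(d, s)] for s in states) for d in decisions)

# 1) value without additional information
v_prior = best_expected(prior)

# 2) value with imperfect information: a test reports "positive" or "negative"
#    with the assumed likelihoods p(signal | state)
likelihood = {("positive", "good reservoir"): 0.85, ("negative", "good reservoir"): 0.15,
              ("positive", "poor reservoir"): 0.20, ("negative", "poor reservoir"): 0.80}
signals = ["positive", "negative"]

v_with_info = 0.0
for sig in signals:
    p_sig = sum(likelihood[(sig, s)] * prior[s] for s in states)               # pre-posterior
    posterior = {s: likelihood[(sig, s)] * prior[s] / p_sig for s in states}   # bayes' theorem
    v_with_info += p_sig * best_expected(posterior)

voi = v_with_info - v_prior
print(f"value without information : {v_prior:6.1f}")
print(f"value with information    : {v_with_info:6.1f}")
print(f"value of information (voi): {voi:6.1f}")
```

information has a positive value in this sketch only because an unfavourable signal changes the decision from developing to walking away; if the decision were the same under every signal, the voi would be zero.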
a series of works from several authors in the oil and gas industry shows an increasing interest in using voi as a tool for making decisions. dougherty (1971) shows several straightforward applications of voi for the oil and gas industry. warren (1983) discusses the case study of a field development decision regarding initiating, rejecting or postponing a project decision until more information is gathered; lohrenz (1988) reviews four examples of the value of data acquisition using decision trees; silbergh and brons (1972) debate several methods of project valuation, utility functions, and voi. moras, lesso, and macdonald (1987) show the value associated with different numbers of observation wells to monitor underground gas storage. gerhardt and haldorsen (1989) show several applications of voi for typical examples of decisions in subsurface problems; dunn (1992) discusses the voi of well logs while stibolt and lehman (1993) do the same for seismic data. demirmen (1996) broadens the use of voi by using it in the two types of appraisal activities: screening, and optimization; this is one of the first references that discuss the use of voi on a complete oil and gas project and, open the possibility to use this tool as a means for ranking subsurface appraisal activities. koninx (2000) reviews voi from a methodological perspective and discuss important criteria that should be taken into consideration when data is proposed to be acquired such as the value of assurance and value of creation; bratvold, bickel, and lohne (2007) show how to make a voi assessment and discuss a statistical review of the published work about voi which indicates that it is still far from being a standard application in the oil and gas industry and conclude with identification and discussions of the possible causes of the limited use of voi in the oil and gas industry. new insight on voi is shown by kullawan, bratvold, and bickel (2014) by applying voi to real-time geosteering operations and by vilela, oluyemi and petrovski (2018, 2019a) by introducing the fuzzy nature of the data in the voi assessment. in the previous works, voi was applied on “isolated” data-gathering activities related to one of the project uncertainties, by assessing the impact that acquiring such data had on the value of the project; however, from a project standpoint, the essential objective is the identification and quantification of the benefits that are likely to come from any possible data acquisition activities that maximize the project value and not just one of the possible data acquisition activities, without considering the uncertainties in the complete project. the identification and definition of the data acquisition activities that maximize the project’s value can be made using the technique of doe. uncertainty can be aleatoric (related with noise inherent in the observations; it is unavoidable) and epistemic (related with models used to mimic the reality; it is feasible to be reduced by additional data acquisition). in problems characterized by epistemic uncertainty in the input and/or output variables, it is important to know vilela et al./decis. mak. appl. manag. eng. 3 (2) (2020) 97-118 100 what are the ranges of variability and the relative importance that each of the input variables has on the range of variability and values of the output variables. 
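the importance of knowing which uncertain inputs drive the variability of the outputs can be illustrated with a simple one-variable-at-a-time screening, sketched below with an invented surrogate for a project valuation model; the design of experiments approach introduced next generalises this idea and, unlike one-at-a-time swings, also captures interactions between the inputs.

```python
# hypothetical sketch: one-at-a-time ("tornado") screening of how much each
# uncertain input swings a project response. the response function and the
# input ranges are invented stand-ins for a reservoir/economic model.
def project_value(porosity, permeability, oil_price):
    """toy surrogate for a project valuation model (illustrative only)."""
    return 250 * porosity + 0.4 * permeability + 3.5 * oil_price - 180

base = {"porosity": 0.20, "permeability": 150.0, "oil_price": 60.0}
ranges = {"porosity": (0.12, 0.28), "permeability": (50.0, 400.0), "oil_price": (40.0, 90.0)}

base_value = project_value(**base)
swings = {}
for name, (low, high) in ranges.items():
    lo_case = dict(base, **{name: low})
    hi_case = dict(base, **{name: high})
    swings[name] = abs(project_value(**hi_case) - project_value(**lo_case))

print(f"base case value: {base_value:.1f}")
for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{name:>13}: swing {swing:7.1f}")   # largest swing = most influential input
```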
design of experiments is a structured and organized methodology to conduct and analyze experiments by defining each one by a specific set of values for the input parameters; the experiments (or simulation runs) should be performed to assess the impact that input parameters and their interactions have on the output variables (montgomery, 2005). doe has been used for improving the performance of processes and reducing result variability and cost (telford, 2007). doe is used to understand a system or process by means of experimentation; figure 1 shows that the input parameters, combined by the system or process under consideration, are affected by factors (controllable and uncontrollable) which produce the output parameters.
figure 1. diagram of the design of experiments approach
doe was invented by the statistician ronald fisher (1935) to understand the factors involved in increasing crop yield in the uk, and its foundations were completed thanks to the work of box and wilson (1951), box, hunter and hunter (1978) and box and draper (1987). law and kelton (1991) and myers and montgomery (2002) develop doe methods for simulation purposes; doe has expanded its applications to several domains such as the chemical industry (yang, bi, and mao, 2002; sjoblom et al., 2005; ruotolo and gubulin, 2005), materials (suffield, dillman, and haworth, 2004; liao, 2003; hoipkemeier-wilson et al., 2004), industrial engineering (tong, kwong, and yu, 2004; galantucci, percoco, and spina, 2003; du et al., 2002), electronics (ogle and hornberger, 2001), mechanical engineering (passmore, patel and lorentzen, 2001; nataraj, arunachalam, and dhandapani, 2005; farhang-mehr and azann, 2005; cervantes and engstrom, 2004), aerospace (zang and green, 1999) and the analysis and optimization of nonlinear systems (sacks et al., 1989). computational deterministic experimentation (e.g. as used for dynamic reservoir simulation) differs from real-world experimentation in that the former has no random error while the latter does; in practical terms, that means we always get the same output for a specific set of input parameters. similar to real-world experimentation, the objective of simulation experimentation is to determine the factors that have a large impact on the response, getting the results with the least number of simulation runs (law, 2015). the first applications of doe in the oil and gas industry were by damsleth, hage, and volden (1992), egeland et al. (1992) and larsen, kristoffersen, and egeland (1994); after those applications, doe has been used for identifying the main geological parameters responsible for oil recovery (white et al., 2001); for integrating uncertainties to quantify their impact on original oil in place, recoverable reserves and production profiles (corre, de feraudy and vincent, 2000); for assessing uncertainties in production profiles (venkataraman, 2000); for investigating the impact of geologic heterogeneities and uncertainties in different development schemes (wang and white, 2002); and for defining the minimum number of reservoir simulation runs needed to identify and quantify the factors responsible for the uncertainties of the reservoir performance (peake, abadah and skander, 2005).
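to make the screening idea concrete, the sketch below (python, illustrative only) builds a two-level full factorial design for four coded factors, evaluates a stand-in deterministic response for every run, and estimates the main effects that a pareto chart of effects would rank; the factor names and the response function are assumptions, not the reservoir model used later in this paper.

from itertools import product
import numpy as np

factors = ["owc", "aqu", "rpe", "pxy"]            # coded levels: -1 (low), +1 (high)
design = np.array(list(product([-1, 1], repeat=len(factors))))   # 2^4 = 16 runs

def response(run):
    # stand-in for a simulator output (e.g. cumulative oil); purely illustrative coefficients
    owc, aqu, rpe, pxy = run
    return 100 + 12 * owc + 8 * aqu + 5 * rpe + 3 * pxy + 4 * owc * aqu

y = np.array([response(run) for run in design])

# main effect of factor j = mean(response | level +1) - mean(response | level -1)
effects = {name: y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
           for j, name in enumerate(factors)}

for name, effect in sorted(effects.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{name}: {effect:+.1f}")

ranking the absolute effects (and, in practice, the two-factor interactions as well) is exactly what the screening phase uses to decide which uncertain parameters deserve further simulation runs.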
additionally, studies on production forecasting and ultimate recovery estimates that represent the numerical reservoir simulation by a surrogate response surface model are discussed by friedmann, chawathe and larue (2001) and murtha et al. (2009), while dejean and blanc (1999) discuss doe, dividing the uncertain factors into uncontrollable and controllable and adapting doe accordingly, and law (2017) discusses the workflow for applying doe to simulation modelling. capturing all the uncertainties that the project may have and their impact on the output variables is of great importance in order to determine which data are worthwhile to acquire. in 1965, lotfi zadeh published the paper "fuzzy sets", where he describes the mathematics of fuzzy numbers and how fuzzy logic can be used to describe events with a partial degree of belonging to sets. founded on this work, bellman and zadeh (1970), lakoff (1978), dunn (1992), bezdek (1993, 2014), negoita and ralescu (1977), goguen (1967), bandler and kohout (1978), sugeno and murofushi (1987), sugeno and kang (1988), mizumoto and tanaka (1976, 1981), tanaka, taniguchi, and wang (1999), zimmermann and sebastian (1994), zimmermann (1996), etc. continued the development of the new theory. zadeh (1968) showed how fuzzy events could be described using fuzzy set theory. in 1971, zadeh published "quantitative fuzzy semantics", where he developed the formal elements of fuzzy logic and its applications. fuzzy inference is the process of mapping a set of input variables onto a set of output variables using fuzzy logic; in general, there are two ways of doing that, mamdani and sugeno, depending on the way the outputs are determined. the first fuzzy inference system (fis) was a fuzzy controller for a steam engine developed by assilian and mamdani (1974), where fuzzy logic was used to convert heuristic control rules into an automatic control strategy; the first real implementation of a fuzzy controller was made by lauritz peter holmblad and jens-jørgen østergaard (1980), who developed a commercial fuzzy control system for f.l. smidth & co. in a cement factory in denmark (larsen, 1980; umbers and king, 1980), which resulted in one of the first successful test runs on a full-scale industrial process. subsequent applications of fuzzy logic in several domains have been reported: the assessment of water quality in rivers (ocampo, 2008); improvements in the quality of image expansion (sakalli, yan and fu, 1999); the differential diagnosis of non-toxic thyropathy (guo and ling, 2008); the development of a fuzzy logic controller for a traffic junction (pappis and mamdani, 1997); the design of a sensor-based fire monitoring system for coal mines using fuzzy logic (muduli, jana and mishra, 2018); estimation of the impact of tax legislation reforms on potential tax (musayev, madatova, and rustamov, 2016); pipeline risk assessment (jamshidi et al., 2013); the diagnosis of depression (chattopadhyay, 2014); the assessment of predicted river discharge (jayawardena et al., 2014); calculation of geological strength indices and slope stability assessments (sonmez, gokceoglu and ulusay, 2004); regulation of industrial reactors (ghasem, 2006); and the use of a fuzzy logic approach for file management and organization (gupta, 2011).
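the fuzzy-set machinery referenced above can be illustrated in a few lines; a minimal sketch on a small discrete universe follows, showing membership grades, zadeh's (1965) max/min set operations and zadeh's (1968) probability of a fuzzy event, which is the notion used later when the data outcomes themselves are fuzzy. the sets and probabilities are illustrative assumptions.

import numpy as np

# a small discrete universe with two fuzzy sets defined by membership grades in [0, 1]
universe = np.array([0, 1, 2, 3, 4])
low  = np.array([1.0, 0.7, 0.4, 0.1, 0.0])
high = np.array([0.0, 0.2, 0.5, 0.8, 1.0])

union        = np.maximum(low, high)   # zadeh (1965): membership of the union is the max
intersection = np.minimum(low, high)   # membership of the intersection is the min
complement   = 1.0 - low

# zadeh (1968): the probability of a fuzzy event is the expectation of its membership function
p = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # crisp probabilities over the universe
print("p(low) =", float(p @ low))          # 0.1*1.0 + 0.2*0.7 + 0.4*0.4 + 0.2*0.1 + 0.1*0.0 = 0.42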
similarly, in the oil and gas industry, fuzzy logic has been used for a streamline-based fuzzy logic workflow to redistribute water injection by accounting for operational constraints and the number of supported producers in a pattern (bukhamseen et al., 2017); the identification of horizontal well placement (popa, 2013); estimating the strength of rock using a fis (sari, 2016); and predicting the rate of penetration in shale formations (ahmed et al., 2019). fuzzy logic has been used in combination with other artificial intelligence techniques such as the adaptive neuro-fuzzy inference system (anfis) in practical applications, e.g. to predict the inflow performance of vertical wells producing two-phase flow (basfar et al., 2018) or to predict geomechanical failure parameters (alloush et al., 2017); fis has also been used in conjunction with analytical hierarchical processes to evaluate water injection performance in heterogeneous reservoirs (oluwajuwon and olugbenga, 2018) and to make decisions in the application of fuzzy inference systems for voi in the oil and gas industry (vilela, oluyemi, and petrovski, 2019b). from a methodological perspective, a fis can be understood as a general procedure that transforms a set of input variables into a set of outputs, following the dataflow shown in figure 2.
figure 2. fuzzy inference system dataflow
3. case study
3.1. reservoir information
this case study is based on a clastic reservoir; four exploration and appraisal wells have already been drilled, the first three wells showing good production test results while the fourth well, located in the south of the reservoir, shows inferior results; these test results correlate well with the reservoir quality observed in the four wells; the differences in reservoir quality are attributed to diagenesis processes that occurred in the reservoir. figure 3 shows the four wells in the dynamic simulation model.
figure 3. structural map of the field with the exploration and appraisal wells
3.2. project subsurface uncertainties
the technical team agreed that six parameters carry most of the subsurface uncertainty of this project: i) horizontal permeability distribution (pxy), ii) vertical permeability (pze), iii) relative permeability (rpe), iv) aquifer strength (aqu), v) oil-to-water contact (owc) and vi) well productivity index (pi) value multiplier (wpi); these parameters and their ranges of uncertainty are shown in table 1.
table 1. uncertain parameters: low, medium, and high values
uncertain parameter | low | medium | high
horizontal permeability | extended diagenesis | medium case diagenesis | local diagenesis
vertical permeability (md) | 0.01 | 0.50 | 10.00
relative permeability | co=3.1 / cw=3.3 / sorw=0.15 | co=2.5 / cw=4.4 / sorw=0.17 | co=1.8 / cw=5.5 / sorw=0.20
aquifer strength (aqu vol. / aqi pi, stb/(stb/d/psi)) | 2.5e9 / 217 | 2.52e11 / 434 | 2.52e13 / 868
oil/water contact (m) | 1,070 | 1,075 | 1,080
well pi multiplier | 0.90 | 8.90 | 18.40
the uncertainty associated with the distribution of the reservoir quality is captured by three different scenarios built to represent the high, medium and low cases for the property distribution to the south of the reservoir; due to the lack of data in this field, the ranges of variability in vertical permeability, relative permeability curves, and aquifer strength are taken from analogous fields.
the range of values for the oil-to-water contact is defined by the values observed in the three wells drilled, and the well pi multipliers are the figures used to history match the test results. the dynamic model used to generate the production profiles was made using the eclipse software (schlumberger™). the operator company responsible for this field must decide whether to proceed with, or to terminate, the project; however, the acquisition of new data can change the value of the project and impact that decision. acquiring data carries a cost and a possible delay in the project start; these negative impacts may be worthwhile if compensated by the positive effects of risk reduction and an increase in the project's value.
3.3. assessment of project value of information
the assessment of the value of data acquisition starts with the screening phase, which consists of the identification of the input variables that have the most impact on the objective variable, which in this case is the utility of the net present value (unpv). in this case study, having six input variables (the uncertain variables described in table 1), sixty-six dynamic simulation cases should be set and run (each variable is evaluated at its low and high values). figure 4 shows the cumulative oil production of these sixty-six simulation runs.
figure 4. uncertainty in cumulative oil production
the financial model used to evaluate the project benefits is built using excel software (windows office); this model includes the oil production forecast resulting from the simulation runs, the capex (capital expenditure, or investment), the opex (operational expenditure) and the oil price forecast; no other financial factor was included in this analysis. as discussed by walls (2005), the utility function used is exponential, which in this case study has a tolerance factor (tf) of $4,000 mm; this tf is representative of the company's historic attitude toward risk for oil and gas exploitation projects. for a reference on utility functions in the oil and gas industry, see vilela, oluyemi, and petrovski (2017). in figure 5, a pareto plot of the effects shows that the variables with the largest impact on the objective variable are a (owc), e (aqu), c (rpe), b (pxy), ab (owc/pxy), and ac (owc/rpe), where the last two correspond to interaction effects between those variables; in conclusion, the most relevant parameters for the study are a, e, c and b, which correspond to owc, aqu, rpe and pxy.
figure 5. pareto chart of the effects of the parameters, with a significance level of 0.05
this interpretation is confirmed by using the normal plot, as shown in figure 6.
figure 6. normal plot of the effects of the parameters, with a significance level of 0.05
based on the four relevant variables already identified, sixteen dynamic models need to be evaluated, corresponding to running each input variable at its low and high values while keeping the rest of the variables at the medium level; the outcomes of those models should be further assessed in terms of values (npv, irr) and utility values (unpv, uirr). the technical team estimates the prior probabilities of occurrence for each of these sixteen cases.
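as a pointer to how the utility values used below are obtained, the following sketch assumes the common exponential form u(x) = 1 − exp(−x/tf), with tf the risk-tolerance factor; the functional form is the standard exponential utility (the paper does not spell out its exact parameterization here), and the npv figures and probabilities are illustrative.

import numpy as np

def exp_utility(npv_mm, tolerance_mm=4000.0):
    # exponential (constant risk aversion) utility: u(x) = 1 - exp(-x / tf)
    return 1.0 - np.exp(-np.asarray(npv_mm, dtype=float) / tolerance_mm)

# expected utility of a hypothetical set of outcomes (probabilities and npvs are illustrative)
npvs = np.array([-150.0, 40.0, 310.0])     # mm$
probs = np.array([0.25, 0.45, 0.30])
print(f"expected npv: {probs @ npvs:.1f} mm$")
print(f"expected utility (unpv): {probs @ exp_utility(npvs):.5f}")

because the utility is concave, the expected utility penalizes downside outcomes more than the expected npv does, which is how the company's risk attitude enters the screening and decision phases.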
two alternatives for data acquisition are considered: i) drilling a new well and performing an extended well test, and ii) performing an extended well test on an existing well.
1) drilling a new well and performing an extended well test
by drilling a new well and performing an extended well test, the four uncertain input variables will be impacted; the new well should be located between the three wells with good properties and the well with bad properties; this well will de-risk the pxy distribution and the owc; in addition, a new core sample can be taken to assess the relative permeability, and the extended well test will be used to obtain the aquifer parameters.
2) performing an extended well test on an existing well
by using one of the existing wells for performing an extended well test, only the uncertainty related to the aquifer strength can be investigated, keeping the remaining uncertainties at the same level as in the case without data acquisition.
bayes' theorem should be applied to incorporate the value of the new data in the project value; to do that, the reliability probability for all the state–data outcome combinations should be estimated, and those values are converted, by means of bayes' theorem, into the posterior probabilities, which are used for calculating the project value for each alternative.
in this research study, two different cases are assessed: the case where the data are treated as crisp, and the case where the data are treated as fuzzy. in the latter case, the uncertainty in the data due to imprecision is captured by using membership functions; for doing that, three membership functions are defined: m1 or high, m2 or medium, and m3 or low. here, high means that the compound effect of data acquisition over the four variables is high, although in one or more variables that may not be the case. the same applies to the medium and low membership functions. the value assigned to each compound state for each membership function describes the degree of membership that the compound state has in the respective membership function, and the compound value of the four variables in the membership function is the average value. tables 2a and 2b show the membership functions m1, m2 and m3 for each potential data outcome in the case of the drilling a new well and performing an extended well test alternative.
table 2a. membership functions for the first eight compound parameters for drilling a new well and performing an extended well test
 | (hhhh) | (hhhl) | (hhlh) | (hhll) | (hlhh) | (hlhl) | (hllh) | (hlll)
m1 | 0.638 | 0.550 | 0.525 | 0.438 | 0.500 | 0.413 | 0.388 | 0.300
m2 | 0.250 | 0.263 | 0.275 | 0.288 | 0.250 | 0.263 | 0.275 | 0.288
m3 | 0.113 | 0.188 | 0.200 | 0.275 | 0.250 | 0.325 | 0.338 | 0.413
table 2b. membership functions for the last eight compound parameters for drilling a new well and performing an extended well test
 | (lhhh) | (lhhl) | (lhlh) | (lhll) | (llhh) | (llhl) | (lllh) | (llll)
m1 | 0.525 | 0.438 | 0.413 | 0.325 | 0.388 | 0.300 | 0.275 | 0.188
m2 | 0.263 | 0.275 | 0.288 | 0.300 | 0.263 | 0.275 | 0.288 | 0.300
m3 | 0.213 | 0.288 | 0.300 | 0.375 | 0.350 | 0.425 | 0.438 | 0.513
for the case of using an existing well, the membership functions m1, m2 and m3 corresponding to high, medium and low are calculated; in this case (using an existing well), the only parameter that is evaluated is the aquifer strength.
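one way to read the membership-conditioned step just described is sketched below: each fuzzy data outcome (m1/m2/m3) gets a probability computed as the expectation of its membership function over the prior (zadeh, 1968), the prior is then re-weighted by the memberships in place of a crisp reliability column, and a decision is taken per outcome before averaging. the four states, probabilities and utility values are illustrative placeholders, not the paper's sixteen compound states or the table 2 values.

import numpy as np

# four hypothetical compound states with prior probabilities and utility values
prior = np.array([0.4, 0.3, 0.2, 0.1])
u_state = np.array([0.020, 0.005, -0.010, -0.030])
u_walk_away = 0.0                              # utility of not proceeding with the project

# degree of membership of each state in the fuzzy data outcomes m1 (high), m2 (medium), m3 (low)
membership = np.array([[0.6, 0.5, 0.3, 0.2],   # m1
                       [0.3, 0.3, 0.3, 0.3],   # m2
                       [0.1, 0.2, 0.4, 0.5]])  # m3

eu_no_data = max(prior @ u_state, u_walk_away)

eu_with_data = 0.0
for mu in membership:
    p_outcome = mu @ prior                     # probability of the fuzzy event (zadeh, 1968)
    posterior = mu * prior / p_outcome         # memberships replace the crisp reliability column
    eu_with_data += p_outcome * max(posterior @ u_state, u_walk_away)

print(f"expected utility without data {eu_no_data:.4f}, with fuzzy data {eu_with_data:.4f}")

the vaguer the memberships (the closer m1, m2 and m3 are to each other), the less the posterior weights move away from the prior and the smaller the value added by the data acquisition, which is the qualitative behaviour reported in the results below.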
tables 3a and 3b show the value assigned to each compound state within the three membership functions, which reflects the degree of membership that the state has in the corresponding membership function for the "performing an extended well test on an existing well" alternative.
table 3a. membership functions for the first eight compound parameters for performing an extended well test on an existing well
 | (hhhh) | (hhhl) | (hhlh) | (hhll) | (hlhh) | (hlhl) | (hllh) | (hlll)
m1 | 0.700 | 0.700 | 0.700 | 0.700 | 0.150 | 0.150 | 0.150 | 0.150
m2 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200
m3 | 0.100 | 0.100 | 0.100 | 0.100 | 0.650 | 0.650 | 0.650 | 0.650
table 3b. membership functions for the last eight compound parameters for performing an extended well test on an existing well
 | (lhhh) | (lhhl) | (lhlh) | (lhll) | (llhh) | (llhl) | (lllh) | (llll)
m1 | 0.700 | 0.700 | 0.700 | 0.700 | 0.150 | 0.150 | 0.150 | 0.150
m2 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200
m3 | 0.100 | 0.100 | 0.100 | 0.100 | 0.650 | 0.650 | 0.650 | 0.650
in the decision phase, on top of the unpv already used in the screening phase, the internal rate of return (irr) and its utility value (uirr) are used. the fis was built using matlab® r2015a software with triangular and truncated triangular functions. the values involved in the decision are unpv and uirr, and their fuzziness is represented with three membership functions for each one: unpv_high, unpv_medium, unpv_low, uirr_high, uirr_medium, and uirr_low. the decision options are "to endorse", "not to endorse" or "to reframe" the project. if…then rules are designed to reflect the imprecision in the decision process. for this case study, nine rules were created, as shown in table 4.
table 4. decision-making rules for the fis
# | if | then
rule 1 | (unpv is unpv_low) and (uirr is uirr_high) | (decision is reframing)
rule 2 | (unpv is unpv_low) and (uirr is uirr_medium) | (decision is no_endorsement)
rule 3 | (unpv is unpv_low) and (uirr is uirr_low) | (decision is no_endorsement)
rule 4 | (unpv is unpv_medium) and (uirr is uirr_high) | (decision is endorsement)
rule 5 | (unpv is unpv_medium) and (uirr is uirr_medium) | (decision is reframing)
rule 6 | (unpv is unpv_medium) and (uirr is uirr_low) | (decision is no_endorsement)
rule 7 | (unpv is unpv_high) and (uirr is uirr_high) | (decision is endorsement)
rule 8 | (unpv is unpv_high) and (uirr is uirr_medium) | (decision is endorsement)
rule 9 | (unpv is unpv_high) and (uirr is uirr_low) | (decision is reframing)
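a compact, hand-rolled mamdani-style reading of table 4 is sketched below (min for "and", rule-strength clipping, max aggregation, centroid defuzzification on a coarse grid); the membership breakpoints and the normalized input ranges are assumptions for illustration, not the matlab® implementation used in this paper.

import numpy as np

def tri(x, a, b, c):
    # triangular membership function (vectorized); handles the shouldered end points
    x = np.asarray(x, dtype=float)
    left = np.clip((x - a) / (b - a), 0.0, 1.0) if b > a else (x >= b).astype(float)
    right = np.clip((c - x) / (c - b), 0.0, 1.0) if c > b else (x <= b).astype(float)
    return np.minimum(left, right)

# illustrative low/medium/high sets on inputs (and output) normalized to [0, 1]
mf = {"low": (0.0, 0.0, 0.5), "medium": (0.0, 0.5, 1.0), "high": (0.5, 1.0, 1.0)}
out_mf = {"no_endorsement": (0.0, 0.0, 0.5), "reframing": (0.0, 0.5, 1.0),
          "endorsement": (0.5, 1.0, 1.0)}

# table 4 rules: (unpv set, uirr set) -> decision set
rules = {("low", "high"): "reframing",      ("low", "medium"): "no_endorsement",
         ("low", "low"): "no_endorsement",  ("medium", "high"): "endorsement",
         ("medium", "medium"): "reframing", ("medium", "low"): "no_endorsement",
         ("high", "high"): "endorsement",   ("high", "medium"): "endorsement",
         ("high", "low"): "reframing"}

def fis(unpv, uirr, n=501):
    z = np.linspace(0.0, 1.0, n)                                         # output universe
    agg = np.zeros_like(z)
    for (s_npv, s_irr), decision in rules.items():
        strength = min(tri(unpv, *mf[s_npv]), tri(uirr, *mf[s_irr]))     # mamdani "and" = min
        agg = np.maximum(agg, np.minimum(strength, tri(z, *out_mf[decision])))  # clip and aggregate
    return float((agg * z).sum() / agg.sum()) if agg.any() else 0.5      # centroid defuzzification

print(f"fis decision score for illustrative inputs: {fis(0.30, 0.65):.3f}")

the single defuzzified score is what allows two (or more) financial criteria to be collapsed into one decision variable, which is how the fis values and fis utility values in the results tables should be read.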
3.4. case study results
expected value assessment using crisp and fuzzy data
1) the expected value for the drilling a new well and performing an extended well test data acquisition
table 5 shows the results of the evaluation for the case of drilling a new well and performing an extended well test.
table 5. expected value assessment for drilling a new well and performing an extended well test data acquisition proposal
values | no data | crisp data | fuzzy data
npv (mm $) | 3.02 | 12.19 | −9.78
irr (%) | −2.30 | −2.49 | −2.49
unpv | −0.0069 | 0.0006 | −0.0074
uirr | −0.2536 | 0.2665 | −0.2741
unpv analysis
when unpv is used as a decision criterion, table 5 shows that, when the classical methodology is used, the value of the "drilling a new well and performing an extended well test" alternative is higher than the value of the "do not acquire data" alternative; however, when the fuzzy nature of the data is included in the analysis, the value of the "drilling a new well and performing an extended well test" alternative is reduced, and indeed the "no data acquisition" alternative is better than data acquisition. this change in the decision when the fuzzy characteristics of the data are included in the analysis is maintained in the case of using values instead of utility values.
uirr analysis
when uirr is used as a decision criterion, the "drilling a new well and performing an extended well test" alternative has a lower value than the "do not acquire data" alternative in both cases, crisp and fuzzy data; this assessment holds when values are used instead of utilities.
2) the expected value for the performing an extended well test on an existing well data acquisition
for this alternative, table 6 shows the results of the evaluation.
table 6. expected value assessment for performing an extended well test on an existing well data acquisition proposal
values | no data | crisp data | fuzzy data
npv (mm $) | 3.02 | 98.04 | 97.31
irr (%) | −2.30 | −1.53 | −1.53
unpv | −0.0069 | 0.0197 | 0.0174
uirr | −0.2536 | −0.1752 | −0.1798
unpv analysis
using unpv as a decision criterion, and with the classical methodology, for the data acquisition case of performing an extended well test on an existing well, the data acquisition alternative has a higher value than the "do not acquire data" alternative whether the data is crisp or fuzzy. the same conclusion is reached using values instead of utilities.
uirr analysis
when uirr is used as a decision criterion to assess the case of performing an extended well test on an existing well, the classical methodology shows that the best project is the data acquisition alternative, because both crisp and fuzzy data acquisition have higher values than the "do not acquire data" alternative; a similar conclusion is reached when values are used instead of utilities.
fuzzy inference system assessment using crisp and fuzzy data
tables 7 and 8 show the outcomes of the fuzzy inference assessments for the cases of drilling a new well and performing an extended well test and of performing an extended well test on an existing well.
table 7. fis assessment for drilling a new well and performing an extended well test data acquisition proposal
values | no data | crisp data | fuzzy data
fis values | −0.217 | −0.170 | −0.359
fis utility values | −0.274 | −0.268 | −0.289
table 8. fis assessment for performing an extended well test on an existing well data acquisition proposal
values | no data | crisp data | fuzzy data
fis values | −0.217 | 0.444 | 0.444
fis utility values | −0.274 | −0.171 | −0.178
considering the results shown in table 7 for drilling a new well and performing an extended well test, using crisp data, both the values and the utility values (brought about through the utility function) indicate that the best alternative is "acquire data", i.e. drill the new well and perform an extended well test; however, when the fuzzy characteristics of the data are included in the assessment, the best alternative switches to "do not acquire data". table 8 shows that for the "performing an extended well test on an existing well" alternative, both objective functions, fis values and fis utility values, indicate that the best alternative is "acquire data", i.e. perform an extended well test on an existing well; the inclusion of the fuzzy characteristics of the data in the analysis does not change the results.
4. conclusions and recommendations
the inclusion in the voi assessment of the fuzzy characteristics of the data, which deal with aleatoric but also affect epistemic uncertainties, is very important because it can have a significant impact on the final decisions. in the case study discussed in this paper, for the "drilling a new well and performing an extended well test" alternative, the decision switched from "acquire data" to "do not acquire data" when the fuzzy nature of the data was included in the analysis. it was observed that in the "performing an extended well test on an existing well" alternative, that switch does not occur. that happens for two reasons: i) the difference in values and utility values between the two alternatives, "performing an extended well test on an existing well" and "do not acquire data", is not large and, ii) the degree of fuzziness, or the level of vagueness, assigned to the data as described by the membership functions. in general, it is observed that when the fuzzy characteristic of the data is included in the analysis, the value of the data acquisition is reduced. using a fuzzy inference system allows for the aggregation of two or more decision criteria (npv, irr, etc.) into only one decision criterion that summarizes the result of the assessment; the several decision criteria can be weighted as desired within the fis. doe is a robust theory suitable for the analysis of voi problems and for steering the decision process towards selecting the data acquisition actions that provide the optimum value to the project; proceeding in this way ensures that the decision process fits the needs of the oil and gas industry. however, the membership functions and utility functions still carry a large degree of subjectivity, and further work is required to assess the level of subjectivity and how it might impact voi analysis. in the near future, additional research efforts should be dedicated to the use of machine learning to support the decision-making process by integrating the normative and descriptive elements of the decision process in a coherent and rational manner.
author contributions: each author has participated and contributed sufficiently to take public responsibility for appropriate portions of the content.
funding: this research received no external funding.
conflicts of interest: the authors declare no conflicts of interest.
references
ahmed, a., elkatatny, s., ali, a., mahmoud, m. & abdulraheem, a. (2019). rate of penetration prediction in shale formation using fuzzy logic. in: proceedings of international petroleum technology conference, 26–29 march 2019. beijing, china: society of petroleum engineers.
spe 19548-ms. alloush, r., elkatatny, s., mahmoud, m., moussa, t., ali, a. & abdulraheem, a. (2017). estimation of geomechanical failure parameters from well logs using artificial intelligence techniques. in: proceedings of kuwait oil & gas show and conference, 15–18 october 2017. kuwait city, kuwait: society of petroleum engineers. spe 187625-ms. assilian, s. & mamdani, e. (1974). an experiment in linguistic synthesis with a fuzzy logic controller. international journal of man-machine studies, 7, pp. 1–13. reprint: international journal of human-computer studies, 1999, 51, pp. 135–147. bandler, w. & kohout, l. (1978). fuzzy relational products and fuzzy implication operators. in: proceedings of international workshop on fuzzy reasoning theory and applications. september 1978. london, uk. queen mary college, university of london. basfar, s., baarimah, s., elkatany, s., al-ameri, w., zidan, k. & al-dogail, a. (2018). using artificial intelligence to predict ipr for vertical oil well in solution gas drive reservoirs: a new approach. in: proceedings of annual technical symposium and exhibition, 23–26 april 2018. dammam, saudi arabia, kingdom of saudi arabia: society of petroleum engineers. spe 192203-ms. bellman, r. & zadeh, l. (1970). decision-making in a fuzzy environment. management science, 17(4), pp. b-141–b-164. bezdek, j. (1993). fuzzy models—what are they, and why? ieee transactions on fuzzy logic systems, 1(1), pp. 1–6. bezdek, j. (2014). using fuzzy logic in business. procedia social and behavioral sciences, 124, pp. 371–380. box, g. & draper, n. (1987). empirical model-building and response surfaces. new york, usa: john wiley & sons. box, g., hunter, w. & hunter, j. (1978). statistics for experimenters. an introduction to design, data analysis and model building, 2nd ed. hoboken, new jersey, usa: john wiley & sons. box, g.e.p. & wilson, k.b. (1951). on the experimental attainment of optimum conditions. journal of the royal statistical society, series b, 13, pp. 1–45. bratvold, r., bickel, j., & lohne, h. (2007). value of information in the oil and gas industry: past, present, and future. in: proceedings of annual technical conference and exhibition. 11–14 november 2007. anaheim, california, usa: society of petroleum engineers. spe 110378-ms. bukhamseen, n., al-najem, a., saffar, a. & ganis, s. (2017). an injection optimization decision-making tool using streamline based fuzzy logic workflow. in: proceedings of reservoir characterization conference and exhibition, 8–10 may 2017. abu dhabi, uae: society of petroleum engineers. spe 186021-ms. a holistic approach to assessment of value of information (voi) with fuzzy data and decision... 113 cervantes, m.j., & engstrom, t.f. (2004). factorial design applied to cfd. journal of fluids engineering, 126(5), pp. 791–798. chattopadhyay, s. (2014). a neuro-fuzzy approach for the diagnosis of depression. applied computing and informatics (2017) 13, pp. 10–18. corre, b., de feraudy, v. & vincent, g. (2000). integrated uncertainty assessment for project evaluation and risk analysis. in: proceedings of european petroleum conference. 24–25 october 2000. paris, france: society of petroleum engineers. spe 65205-ms. damsleth, e., hage, a., & volden, r. (1992). maximum information at minimum cost: a north sea field development study with an experimental design. in: proceedings of offshore europe conference. 3–6 september 1992. aberdeen, uk: society of petroleum engineers. spe 23139-pa. dejean, j. & blanc, g. (1999). 
managing uncertainties on production predictions using integrated statistical methods. in: proceedings of annual technical conference and exhibition. 3–6 october 1999. houston, texas, usa: society of petroleum engineers. spe 56696-ms. demirmen, f. (1996). use of “value of information” concept in justification and ranking of subsurface appraisal. in: proceedings of annual technical conference and exhibition. 6–8 october 1996. denver, usa: society of petroleum engineers. spe 36631-ms. dougherty, e. (1971). the oilman’s primer on statistical decision theory. society of petroleum engineers library (unpublished). spe 3278-ms. du, z., yang, j., yao, z. & xue, b. (2002). modeling approach of regression orthogonal experiment design for the thermal error compensation of a cnc turning center. journal of materials processing technology, 129(1–3), pp. 619–623. dunn, m. (1992). a method to estimate the value of well log information. in: proceedings of annual technical conference and exhibition. 4–7 october 1992. washington, d.c., usa: society of petroleum engineers. spe 24672-ms. egelan, t., hatlebakk, e., holden, l. & larsen, e. (1992). designing better decisions. in: proceedings of european petroleum computer conference. 25–27 may 1992. stavanger, norway: society of petroleum engineers. spe 24275-ms. farhang-mehr, a. & azann, s. (2005). bayesian meta-modeling of engineering design simulations: a sequential approach with adaptation to irregularities in the response behavior. international journal for numerical methods in engineering, 62(15), pp. 2104–2126. fisher, r. (1973). statistical methods and scientific inference, 3rd ed. new york, usa: macmillan. friedmann, f., chawathe, a., & larue, d. (2001). assessing uncertainties in channelized reservoirs using experimental design. in: proceedings of annual technical conference and exhibition. 30 september–3 october 2001. new orleans, louisiana, usa: society of petroleum engineers. spe 71622-ms. vilela et al./decis. mak. appl. manag. eng. 3 (2) (2020) 97-118 114 galantucci, l., percoco, g., & spina, r. (2003). evaluation of rapid prototypes obtained from reverse engineering. proceedings of the institution of mechanical engineers part b: journal of engineering manufacture, 217(11), pp. 1543–1552. gerhardt, j. & haldorsen, h. (1989). on the value of information. in: proceedings of offshore europe 89. aberdeen, uk: society of petroleum engineers. spe 19291-ms. ghasem, n. (2006). design of a fuzzy logic controller for regulating the temperature in industrial polyethylene fluidized bed reactor. chemical engineering research and design, 84(2), pp. 97‒106. goguen, j. (1967). l-fuzzy sets. journal of mathematical analysis and applications, 18, pp. 145–174. gou, y. & ling, j. (2008). fuzzy bayesian conditional probability model and its application in differential diagnosis of non-toxic thyropathy. in: 2008 2nd international conference on bioinformatics and biomedical engineering, shanghai, ieee, pp. 1843–1846 grayson, c.j. (1960). decisions under uncertainty. drilling decisions by oil and gas operators. boston, massachusetts, usa: harvard university. gupta, n., abhinav, k. & basava, a. (2011). fuzzy file management. in: 2011 3rd international conference on electronic computer technology (icet 2011), vol. 1, pp. 225–228. hoipkemeier-wilson, l., schumacher, j., carman, m., gibson, a., feinberg, a., callow, e., finlay, j., callow, j. & brennan, a. (2004). 
antifouling potential of lubricious, microengineered, pdms elastomers against zoospores of the green fouling alga ulva (enteromorpha). biofouling, 20(1), pp. 53–63. jamshidi, a., yazdani-chamzini, a. & yakhchali, s. (2013). developing a new fuzzy inference system for pipeline risk assessment. journal of loss prevention in the process industries, 26, pp. 197‒208. jayawardena, a., perera, e., zhu, b., amarasekara, j., & vereivalu, v. (2014). a comparative study of fuzzy logic systems approach for river discharge prediction. journal of hydrology, 514, pp. 85‒101. koninx, j. (2000). value-of-information. from cost-cutting to value-creation. in: proceedings of asia pacific oil conference and exhibition. 16–18 october 2000. brisbane, australia: society of petroleum engineers. spe 64390-ms. kullawan, k., bratvold, r. & bickel, j. (2014). value creation with multi-criteria decision making in geosteering operations. in: proceedings of hydrocarbon economics and evaluation symposium. 19–20 may 2014. houston, texas, usa: society of petroleum engineers. spe 169849-ms. lakoff, g. (1978). some remarks on ai and linguistics. cognitive science, 2, pp. 267– 275. larsen, e., kristoffersen, s. & egeland, t. (1994). functional integration in the design and use of a computer-based system for design of statistical experiments. in: proceedings of european petroleum computer conference. 15–17 march 1994. aberdeen, scotland, uk: society of petroleum engineers. spe 27585-ms. a holistic approach to assessment of value of information (voi) with fuzzy data and decision... 115 larsen, p. (1980). industrial application of fuzzy logic control. international journal of man-machine studies, 12, pp. 3–10. law, a. & kelton, w. (1991). simulation modeling and analysis, 2nd ed. new york, usa: mcgraw-hill. law, a. (2015). simulation modeling and analysis, 5th ed. new york, usa: mcgrawhill. law, a. (2017). a tutorial on the design of experiments for simulation modeling. in: proceedings of the 2017 winter simulation conference, savanah, ga, ieee, pp. 550– 564. liao, h.c. (2003). using pcr-tops1s to optimize taguchi’s multi-response problem. international journal of advanced manufacturing technology, 22(9–10), pp. 649– 655. lohrenz, j. (1988). net values of our information. journal of petroleum technology, 40(4), pp. 499–503. spe 16842-pa. mizumoto, m. & tanaka, k. (1976). some properties of fuzzy sets of type 2. information and control, 31, pp. 312–340. mizumoto, m. & tanaka, k. (1981). fuzzy sets and their operations. information and control, 48, pp. 30–48. montgomery, d.c. (2005). design and analysis of experiments, 6th ed. new york, usa: john wiley & sons. moras, r., lesso, w. & macdonald, r. (1987). assessing the value of information provided by observation wells in gas storage reservoirs. society of petroleum engineers, library (unpublished). spe 17262-ms. muduli, l., jana, p. & prasad, d. (2018). wireless sensor network-based fire monitoring in underground coal mines: a fuzzy logic approach. process safety and environmental protection, 113, pp. 435‒447. murtha, j., osorio, r., perez, h., kramer, d., skinner, r., & williams, c. (2009). experimental design: three contrasting projects. in: proceedings of latin american and caribbean petroleum engineering conference. 31 may–3 june 2009. cartagena, colombia: society of petroleum engineers. spe 121878-ms. musayev, a., madatova, sh. & rustamov, s. (2016). evaluation of the impact of the tax legislation reforms on the tax potential by fuzzy inference method. 
procedia computer science, 102, pp. 507‒514. myers, r. & montgomery, d. (2002). response surface methodology. process and product optimization using designed experiments. new york, usa: john wiley & sons. nataraj, m., arunachalam, v.p. & dhandapani, n. (2005). optimizing diesel engine parameters for low emissions using taguchi method variation risk analysis approach. part 1. indian journal of engineering and materials sciences, 12(3), pp. 169–181. negoita, c. & ralescu, d.s. (1977). applications of fuzzy sets to systems analysis. basel: birkhäuser. newendorp, p. (1967). application of utility theory to drilling investment decisions. ph.d. thesis. the university of oklahoma, usa. vilela et al./decis. mak. appl. manag. eng. 3 (2) (2020) 97-118 116 newendorp, p. (1972). bayesian analysis — a method for updating risk estimates. journal of petroleum technology, 24(2), pp. 193–198. spe 3263-pa. ocampo, w. (2008). on the development of decision-making systems based on fuzzy models to assess water quality in rivers. ph.d. thesis, universitat rovira i virgili, italy. ogle, t. & hornberger, l. (2001). technical note: reduction of measurement variation: small acoustic chamber screening of hard disk drives. noise control engineering journal, 49(2), pp. 103–107. oluwajuwon, i. & olugbenga, f. (2018). evaluation of water injection performance in heterogeneous reservoirs using analytical hierarchical processing and fuzzy logic. in: proceedings of nigerian annual international conference and exhibition, 6–8 august 2018. lagos, nigeria: society of petroleum engineers. spe 193386-ms. pappis, c. & mamdani, e. (1997). a fuzzy logic controller for a traffic junction. ieee transactions on systems, man and cybernetics, 7(10), pp. 707–717. passmore, m.a., patel, a. & lorentzen, r. (2001). the influence of engine demand map design on vehicle perceived performance. international journal of vehicle design, 26(5), pp. 509–522. peake, w., abadah, m. & skander, l. (2005). uncertainty assessment using experimental design: minagish oolite reservoir. in: proceedings of reservoir simulation symposium. 31 january–2 february 2005. houston, texas, usa: society of petroleum engineers. spe 91820-ms. popa, a. (2013). identification of horizontal well placement using fuzzy logic. in: proceedings of annual technical conference and exhibition, 30 september–2 october 2013. new orleans, louisiana, usa: society of petroleum engineers. spe 166313-ms. raiffa, h. & schlaifer, r. (1961). applied statistical decision theory. boston, massachusetts, usa: harvard university. raiffa, h. (1968). decision analysis: introductory lectures on choices under uncertainty. reading, massachusetts, usa: addison-wesley. ruotolo, l.a.m. & gubulin, j.c. (2005). a factorial-design study of the variables affecting the electrochemical reduction of cr (vi) at polyaniline-modified electrodes. chemical engineering journal, 110(1–3), pp. 113–121. sacks, j., welch, w., mitchell, t. & wynn, h. (1989). design and analysis of computer experiments. statistical science, 4(4), pp. 409–435. sakalli, m., yan, h. & fu, a. (1999). a fuzzy bayesian approach to image expansion. in: ijcnn 1999, international joint conference on neural networks, ieee, washington, dc, usa, pp. 2685‒2689. sari, m. (2016). estimating strength of rock masses using a fuzzy inference system. in: rock mechanics and rock engineering: from the past to the future. taylor & francis group, london schlaifer, r. (1959). analysis of decisions under uncertainty. new york, usa: mcgraw-hill. silbergh, m. 
& brons, f. (1972). profitability analysis — where are we now? journal of petroleum technology, 24(1), pp. 90–100. in: proceedings of 45th annual fall a holistic approach to assessment of value of information (voi) with fuzzy data and decision... 117 meeting. 4–7 october 1972. houston, usa: society of petroleum technology. spe 2994-pa. sjoblom, j., papadakis, k., creaser, d., & odenbrand, i. (2005). use of experimental design in the development of a catalyst system. catalysis today, 100(3–4), pp. 243– 248. sonmez, h., gokceoglu, c., & ulusay, r. (2004). a mandani fuzzy inference system for the geological strength index (gsi) and its use in slope stability assessment. international journal of rock mechanics and mineral science, 41(3), pp. 780‒785. stibolt, r. & lehman, j. (1993). the value of a seismic option. in: proceedings of hydrocarbons economics and evaluation symposium. 29–30 march 1993. dallas, texas, usa: society of petroleum engineers. spe 25821-ms. suffield, r.m, dillman, s.h. & haworth, j.e. (2004). evaluation of antioxidant performance of a natural product in polyolefins. journal of vinyl and additive technology, 10(1), pp. 52–56. sugeno, m. & kang, g. (1988). fuzzy modeling and control of multilayer incinerator. fuzzy sets and systems, 25, pp. 259–260. sugeno, m. & murofushi, t. (1987). pseudo-additive measures and integrals. journal of mathematics analysis and applications, 122, pp. 197–222. tanaka, k., taniguchi, t., & wang, h. (1999). robust and optimal fuzzy control: a linear matrix inequality approach. in: proceedings of 14th triennial international federation of automatic control (ifac) world congress. 5–9 july 1999. beijing, p.r. china, pp. 5380–5385. telford, j. (2007). a brief introduction to design of experiments. johns hopkins apl technical digest, 27(3), pp. 224–232. tong, k.w., kwong, c.k. & yu, k.m. (2004). process optimization of transfer moulding for electronic packages using artificial neural networks and multi-objective optimization techniques. international journal of advanced manufacturing technology, 24(9–10), pp. 675–685. umbers, i. & king, p. (1980). an analysis of human decision-making in cement kiln control and the implications for automation. international journal of man-machine studies, 12, pp. 11–23. venkataraman, r. (2000). application of the methods of experimental design to quantify uncertainty in production profiles. in: proceedings of asia pacific conference on integrated modelling for asset management. 25–26 april 2000. yokohama, japan: society of petroleum engineers. spe 59422-ms. vilela, m., oluyemi, g. & petrovski, a. (2017). value of information and risk preference in oil and gas exploration and production projects. in: proceedings of annual caspian technical conference and exhibition. 1–3 november 2017. baku, azerbaijan: society of petroleum engineers. spe 189044-ms. vilela, m., oluyemi, g. & petrovski, a. (2018). fuzzy data analysis methodology for the assessment of value of information in the oil and gas industry. in: proceedings of 2018 ieee international conference on fuzzy systems (fuzz-ieee), rio de janeiro, brazil, july 9–13, 2018, pp. 1540–1546. vilela et al./decis. mak. appl. manag. eng. 3 (2) (2020) 97-118 118 vilela, m., oluyemi, g. & petrovski, a. (2019a). a fuzzy inference system applied to the value of information assessment for the oil and gas industry. decision making: applications in management and engineering, 2(2), pp. 1‒18. vilela, m., oluyemi, g. & petrovski, a. (2019b). 
fuzzy logic applied to value of information assessment in oil and gas projects. petroleum science, 16(5), pp. 1208– 1220. wang, f. & white, ch. (2002). designed simulation for a detailed 3d turbidite reservoir model. in: proceedings of gas technology symposium. 30 april–2 may 2002. calgary, alberta, canada: society of petroleum engineers. spe 75515-ms. warren, j. (1983). development decision: value of information. in: proceedings of hydrocarbon economics and evaluation symposium of the society of petroleum engineers of aime. 3–4 march 1983. dallas, texas: society of petroleum engineers. spe 11312-ms. white, ch., willis, b., narayanan, k. & dutton, sh. (2001). identifying and estimating significant geologic parameters with experimental design. in: proceedings of annual technical conference and exhibition. 1–4 october 2001. dallas, texas, usa: society of petroleum engineers. spe 74140-pa. yang, c., bi, x.y. & mao, z.s. (2002). effect of reaction engineering factors on biphasic hydroformylation of 1-dodecane catalyzed by water-soluble rhodium complex. journal of molecular catalysis a: chemical, 187(1), pp. 35–46. zadeh, l. (1965). fuzzy sets. information and control, 8, pp. 338–353. zadeh, l. (1968). probability measures of fuzzy events. journal of mathematical analysis and applications, 23, pp. 421–427. zadeh, l. (1971). quantitative fuzzy semantics. information science journal, 3(2), pp. 159–176. zang, t. & green, l. (1999). multidisciplinary design optimization techniques: implications and opportunities for fluid dynamics research. in: proceedings of 30th aiaa fluid dynamics conference, norfolk, virginia, usa. june 28–july 1, pp. 1–20. paper aiaa-99-3708. zimmermann, h. & sebastian, h. (1994). fuzzy design-integration of fuzzy theory with knowledge-based system-design. in: proceedings of ieee 3rd international fuzzy systems conference, 1, pp. 352–357. zimmermann, h. (1996). fuzzy logic on the frontiers of decision analysis and expert systems. in: proceedings of the 1996 biennial conference of the north american fuzzy information processing society – nafips, berkeley, ca, usa, june 19–22, 1996, pp. 65–69. © 2018 by the authors. submitted for possible open access publication under the terms and conditions of the creative commons attribution (cc by) license (http://creativecommons.org/licenses/by/4.0/). plane thermoelastic waves in infinite half-space caused decision making: applications in management and engineering vol. 4, issue 1, 2021, pp. 174-193. issn: 2560-6018 eissn: 2620-0104 doi: https://doi.org/10.31181/dmame2104174s logistics performances of gulf cooperation council’s countries in global supply chains ilija stojanović1 and adis puška2 1 college of business studies, al ghurair university, dubai, united arab emirates 2 institutes for scientific research and development, brčko district, bosnia and herzegovina received: 10 january 2021; accepted: 7 march 2021; available online: 13 march 2021. original scientific paper abstract: regional integration into the gulf cooperation council has enabled respective countries to effectively participate in global supply chains. to ensure effective integration of this region into global supply chains, logistics operations are a very important determinant. the aim of this study was to assess logistical performances of gcc countries, and to identify which country has the best conditions for establishing a regional logistic center. for this study, we used relevant data from logistics performance index (lpi) developed by the world bank. 
the research was conducted using a hybrid multi-criteria approach based on the critic and mabac methods. the findings of this study indicate that the united arab emirates has the best conditions for establishing a regional logistics center. this study also releveled the areas of logistics in which other gcc countries should make an improvement to improve their logistical performance. keywords: logistics center; logistics performance; global supply chains; gcc countries; multi-criteria analysis. 1. introduction global competitive pressure is forcing countries to strengthen their position in the world market through regional integration. with trade and customs agreements individual countries have been enabled to improve their competitive position within a single regional market towards other regions and countries globally. this was also the incentive for the gulf countries to establish a cooperation council for the arab states of the gulf in 1981, also known as gulf cooperation council (gcc), composed by bahrain, kuwait, oman, qatar, saudi arabia, and the united arab emirates. gulf integration has enabled facilitation of the movement of production, removing trade barriers, and coordinating economic policies, extending the size of the market for the estimated 35.65 million inhabitants who live in this region (fernandes and rodrigues, logistics performances of gulf cooperation council’s countries in global supply chains 175 2009). moreover, it has created the preconditions for establishment of supply chains with the aim of joint gcc exposure on the global market. according to the statistical centre for the cooperation council for the arab countries of the gulf (gcc-stat), total export of gcc countries was around 652 billion of usd in 2018 and rising. well known fact is that oil export is one of the key trade operations, but many other products take an important role in export activities of the gcc region. having this in mind, durugbo et al. (2020) pointed out the strategic global importance for supply chains for these countries. these scholars found that supply chains in the gcc region confront 3 main complexity management challenges including “strategically selecting and integrating network resources’, ‘reliably contracting and delivering high-quality solutions’, and ‘cost effectively controlling and financing operational expansions” (durugbo et al., 2020, p.13). they also proposed to gcc-based companies to work closely in enabling optimization of their export activities to maximize competitiveness and minimize operational risks and uncertainty. to create an effective supply chain, appropriate logistics operations are crucial. according to christopher (2017, p.4), “effective logistics and supply chain management can provide a major source of competitive advantage”. having in mind global market game and the necessity of gcc countries to be included effectively into global supply chains, we focused our academic curiosity to logistical performances of gcc countries. our main goal of this study is to see which gcc country provides the best conditions in terms of logistics to enable the gcc region to be effectively included into global supply chains. this study provides insight into areas of logistics for each gcc country where improvement is needed to enable more effective logistics operations. the selection of the logistics center was done using the logistics performance index (lpi) data developed by the world bank for the time periods 2012, 2014, 2016 and 2018. 
with the purpose of ranking gcc countries from their logistics performance, a combination of critic (criteria importance through intercriteria correlation) and mabac (multi-attributive border approximation area comparison) were applied. the critic method was used to determine the weight of the criteria in an objective way, while the mabac method was used to rank these countries. this approach allowed determination which of the gcc countries has the best characteristics in lpi over different time periods. this approach addressed the following questions: a) can combinations of mcda methods be used when choosing a logistics center? b) does the ranking of the gcc countries differ throughout different time periods? c) which of the gcc countries has the best lpi characteristics to be proposed for a joint logistics center? the contribution of this approach one can be found in the new way of ranking countries for other regions to determine those with the best lpi characteristics. thus, this study has paved the way for future investigation with the similar approach in other regions with the aim of selecting the logistics center's best location according to the country's logistics performances. in addition to the introduction section, this paper is organized as follows. section two is intended for literature review. in the third section the research methodology is explained, and the mcda methods to be used in this study. the fourth section is intended for research results and for the analysis of the obtained results. the fifth stojanović and puška/decis. mak. appl. manag. eng. 4 (1) (2021) 174-193 176 section is focused to discuss the obtained results, while the sixth selection is intended to conclude the obtained research results. 2. literature review global competition is a major characteristic of today’s marketplace where the race for a better position is constant. this is not only the race between companies, but also between countries that need constantly to evaluate their competitive position (önden at al., 2018). modern time is also characterized with dramatic increase in trade across borders (akkermans at al., 1999). according to kishore and padmanabhan (2016), globalization and global competition indicate the great importance of the logistics industry. however, klassen and whybark (1994), based on their study, found that the complexity of global logistics is one of key barriers to the effective management of international operations. with globalization, global supply chains become highly significant. „global supply chains are a mechanism by which firms can achieve a competitive advantage” (sundarakani at al., 2012, p.2). according to reyes et al. (2002), perceptive firms are increasingly pursuing global supply chain operations to reduce costs. some scholars identify some differences between logistics and supply chains (larson and halldorsson, 2004). according to memedovic et al. (2008, p. 355), “logistics commonly refers to organizing and coordinating the movements of material inputs, final goods and their distribution.” pham et al. (2017) argue that logistics is an important element of supply chains, putting focus especially on logistics centers. we especially emphasize the claim by stević et al. (2015), that logistical centers are key elements of logistical network. according to their opinion, the entire logistics system relies on logistics centers that have integrative function within logistics systems. 
zaralı and yazgan (2016) highlighted that logistics centers have key roles in streaming transport operation at national and international level; the selection of their position is of particular importance for their effectiveness and efficiency. one of the most rapid developing world regions by increasing worldwide circulation of commodities is the region of gulf cooperation council (gcc) countries which become a central node in global trade (ziadah, 2018). this region is composed by six araab countries: the kingdoms of bahrain and saudi arabia, the sultanate of oman, the states of kuwait and qatar, and the united arab emirates. durugbo et al. (2020) estimated that this region accounts for around 30% of the globally known oil reserves. the gcc region also has strategic geographic position along the asia–europe trade route. according to ziadah (2018), authorities in this region have recognized the possibility of economic diversification by making huge investments into logistics infrastructure: maritime ports, roads, rail, airports and logistics cities, and yet is to come from gcc development plans. fernandes and rodrigues (2009) particularly emphasized the importance of special economic zones that have been established as an instrument to boost employment, export, and foreign exchange. according to them, countries within this region are positioning themselves to be logistic hubs by strengthening transport, and connectivity, and this can lead to attracting foreign investments. durugbo et al. (2020) provided great insight into the existing literature of the supply chain management of the gcc region and found high levels of complexity and uncertainty within this regional business environment. one of the complexities found by these authors is related to strategically selecting and integrating network resources logistics performances of gulf cooperation council’s countries in global supply chains 177 within the gcc region, focusing attention on the views of multinational companies towards regional supply chains. according to these authors, those multinational companies located in the gcc region are very focused on regional supply chains. according to memedovic et al. (2008), oil-producing countries, with exception of the united arab emirates and bahrain, perform below their potential and their logistics systems usually focus on their main export commodities rather than focusing on diversification on trade logistics. these authors pointed to an example of dubai ports world that has become one of the most important global port operators, operating 42 port terminals in 27 countries. memedovic at al. (2008) also pointed out that countries with better logistics capabilities can attract more foreign direct investments, decrease transaction costs, diversify export structure, and have higher growth. very important issues in managing logistics operations arise among scholars. one of these issues, as stated by akkermans et al. (1999) is related to managing good flows between facilities in a chain of operations, thus putting focus on the importance of coordinated planning approach that can reduce costs. several scholars warned of the need to have an appropriate coordination in decision making on the design of international facility networks (scully and fawcett, 1993; meijboom and vos, 1997). coe at al. (2004) argued that with establishment of the global commodity chain approach, the importance of regions in economic activities arises. önden et al. 
(2018) argued that the location of the logistics centers is a key element of the transport system and location decisions should be done strategically. otherwise, opposite decisions could increase costs and create transport bottlenecks. however, due to undoubted advantages for the economy, regional authorities want their region to be considered for logistical centers and this could lead to rising logistics costs, increasing travel distances by trucks, and lacking multi-modal transportation possibilities. after analyzing the situation in the gcc region, ziadah (2018) found a large degree of duplication in port infrastructure in this part of the world. thus, analyzing which country in the gcc region provides the highest benefit for the economy of the region is fully justified and we are going to do this with this study. this is especially important due to the necessity to build long-term relationships between regions, which are according to li et al. (2011) critical factor to establish a successful logistics system. complex system of global value chains is dependent on efficient logistics (memedovic et al., 2008). thus, location of logistics centers has become an imperative of logistics and supply chain management because it contributes to the efficiency of supply chain (rao et al., 2015). memedovic et al. (2008) argued that characteristics of each supply chain logistics will affect decisions about the advantages and disadvantages of different locations, and especially costs, transport access, business environment for round-the-clock operations leads to a variety of location strategies. having in mind a trend of moving production in different global regions, this has affected changes in global distribution systems. according to coe et al. (2004), preferred locations for building large distribution centers became gateways and corridors with access to traditional trade gateways and to large consumer markets. based on this notion of the importance of location, these scholars highlighted the importance of enabling competitive logistics services at low rates. fernandes and rodrigues (2009) also argued that staying competitive for companies implies a strategy by which parts of the value chain are in countries where they can take advantage of lower costs due to location factors. at the same time, according to these scholars, companies search for multimodal hubs to optimize the cost efficiencies of sea freight with that of quicker but expensive air freight. stojanović and puška/decis. mak. appl. manag. eng. 4 (1) (2021) 174-193 178 martí et al. (2017) highlighted that international trade has been affected by increased competitiveness of lagged regions that in the past did not play such an important role in the world. thus, they believe that only those countries prepared to implement the advances that commercial globalization requires can benefit from improved logistics performance. according to chow et al. (1994), measurement of performance must recognize the role of an organization in a supply chain. önden et al. (2018) pointed out that logistics performance is an accelerator of the competitiveness of a country and thus, they need to evaluate their position using various indicators including logistics performance index (lpi). memedovic et al. (2008) indicated the usefulness of lpi as a composite index which shows that building the logistics capacity to connect firms, suppliers and consumers is even more important today than costs. 
thus, within logistics performance analysis for gcc countries we will use lpi data. biswas and anand (2020) performed a very interesting comparative analysis of the g7 and brics countries on the basis of logistical competitiveness, and they expanded the criteria by using the adoption of information and communication technologies and co2 intensity in addition to the lpi criteria. a very good insight in the literature dealing with the issue of selecting the best location of the logistics center was given by uyanik et al. (2018) who analyzed 35 different studies with the location selection problem. they found that different methods were applied by different authors, but what they found as common ground across these studies is that the selection decision was based on a different number of criteria by using multi-criteria decision-making models. kuo (2011) used ten selection criteria including port rate, import/export volume, location resistance, extension transportation convenience, transshipment time, one stop service, information abilities, port & warehouse facilities, port operation system, and density of shipping line, while ou and chou (2009) used six factors named valued added service, transportation and distribution systems, market potential, environment, infrastructure and culture to identify international distribution center from a foreign market perspective. elevli et al. (2014) believed that decision makers for selecting locations of logistic centers prefer to pursue more than one goal or consider more than one factor. this is where the justification for use of multi-criteria decision analysis with fuzzy logic lies. very significant studies can be found in the literature that deal with the problem of selection of logistics centers using multi-criteria decision analysis with fuzzy logic. kishore and padmanabhan (2016) argued that the fuzzy approach is capable of capturing vagueness associated with subjective perception of decision makers. li et al. (2011) analyzed among 15 regional logistics center cities and thirteen criteria to identify logistics center location, and they used linguistic variables instead of numerical values in this study applying fuzzy-set theory. these scholars believed that linguistic variables are more appropriate when performance values cannot be expressed with numerical values. elevli (2014) used fuzzy preference ranking organization method for enrichment evaluation. this method combined the concept of fuzzy sets to represent uncertain information with the promethee. kazançoğlu et al. (2019) applied sustainability benchmarking principles by using hybrid multicriteria decision-making method, fuzzy ahp and promethee methods in the selection process. sun et al. (2019) explored location problems in a three-stage logistics network that consists of suppliers, logistics centers, and customers and they put focus on the environmental sustainability. for their study, they applied two fuzzy mixed integer linear programming models. phamb et al. (2017) developed a benchmarking framework for selection of logistics centers by applying a hybrid of the logistics performances of gulf cooperation council’s countries in global supply chains 179 fuzzy method and the technique for order of preference by similarity to ideal solution (topsis). they found that freight demand, closeness to market, production area, customers, and transportation costs are most important factors for selection. 
biswas and anand (2020) applied the piv (proximity indexed value) method and the topsis method to perform a comparative analysis of the g7 and brics countries. with their study, wang et al. (2010) put their focus on the selection of locations that maximize profits and minimize costs. they established a fuzzy multiple criteria decision-making model based on fuzzy ahp for the ldc assessment. a few years later, wang et al. (2014) focused on consistency and the accuracy of historical assessments by introducing a priority of consistency and historical assessment accuracy mechanism into a fuzzy multi-criteria decision-making approach. focusing on several criteria, such as proximity to highways, railways, airports, and seaports; volume of international trade; total population; and handling capabilities of the ports, önden et al. (2018) combined the fuzzy analytic hierarchy process, spatial statistics and analysis approaches to evaluate the suitability of locations for a logistics center. one of the most interesting studies we found in the literature was delivered by stević et al. (2015), who searched for the best location of a logistics center on the basis of different facts important for the selection of the best location. they used the ahp method of multi-criteria analysis. our research problem is focused on analyzing the logistics performance of the gcc countries to identify which country can provide the best logistical conditions to make this region even stronger within global supply chains.

3. methodology

in this study, the identification of the most suitable location for the logistics center was conducted in the first step through the analysis of the logistics performances of the selected countries. the identification of the logistics center location was performed using a hybrid multi-criteria approach based on the critic and mabac methods. the selected countries examined in this study included six countries from the gulf cooperation council: bahrain, kuwait, oman, qatar, saudi arabia, and the united arab emirates (uae). to assess the logistics performances of the selected gcc countries, we used relevant data from the logistics performance index (lpi) developed by the world bank. based on the lpi, the following indicators were taken into consideration: customs, infrastructure, services, timeliness, tracking and tracing, and international shipments (table 1). during the research, the following steps were conducted:
1. data collection
2. forming of the decision matrix
3. determining the weights of the criteria
4. ranking of the gcc countries
5. analysis of the results

table 1. core components of the lpi
c1  customs                  the efficiency of customs and border management clearance.
c2  infrastructure           the quality of trade and transport infrastructure.
c3  services                 the competence and quality of logistics services.
c4  timeliness               the frequency with which shipments reach consignees within expected delivery times.
c5  tracking and tracing     the ability to track and trace consignments.
c6  international shipments  the ease of arranging competitively priced shipments.
source: world bank

the first step of this research was data collection. for this study, we collected the logistics performance index (lpi) data published by the world bank, available at lpi.worldbank.org. to obtain the most complete data on lpi trends for the gcc countries, data for the years 2012, 2014, 2016 and 2018 were used.
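to make the structure of these data concrete, the following sketch shows one way of organizing a yearly decision matrix; it is written in python with numpy purely for illustration, the variable names are our own, and the 2018 scores are those reported later in table 2.

```python
import numpy as np

# gcc countries (alternatives) and lpi components c1-c6 (criteria), as in table 1
countries = ["bahrain", "kuwait", "oman", "qatar", "saudi arabia", "uae"]
criteria = ["customs", "infrastructure", "services",
            "timeliness", "tracking and tracing", "international shipments"]

# 2018 lpi scores (rows = countries, columns = c1..c6); analogous matrices
# are formed for 2012, 2014 and 2016
lpi_2018 = np.array([
    [2.67, 2.72, 3.02, 2.86, 3.01, 3.29],   # bahrain
    [2.73, 3.02, 2.63, 2.80, 2.66, 3.37],   # kuwait
    [2.87, 3.16, 3.30, 3.05, 2.97, 3.80],   # oman
    [3.00, 3.38, 3.75, 3.42, 3.56, 3.70],   # qatar
    [2.66, 3.11, 2.99, 2.86, 3.17, 3.30],   # saudi arabia
    [3.63, 4.02, 3.85, 3.92, 3.96, 4.38],   # uae
])
```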
after the data were collected, a decision matrix was formed for each of the specified periods and the selected countries. having in mind that we used data from four different periods, we formed four decision matrices to enable further analysis. these decision matrices are the basis for implementing the methods of multi-criteria data analysis. the next step was to determine the importance of the criteria used. before ranking the selected countries, it was necessary to determine the importance of the criteria. in order to eliminate subjectivity in the ranking of the selected countries, the critic method was used to determine the weights of the criteria. the critic method objectively calculates weight values based on standard deviations and correlation coefficients. since we used data for four time periods, the weights for each of these decision matrices were calculated and then reconciled. the adjustment was done by taking the average value of the weights. these average values were used to rank the gcc countries in terms of logistics center selection. after the initial decision matrix was formed and the weights of the criteria were calculated, the gcc countries were ranked in relation to the lpi. the ranking was done using the mabac method. the results of previous studies obtained using the mabac method have shown that this method can be used as a support in decision making (božanić, et al., 2016) and in the ranking of alternatives (pamučar and ćirović, 2015). first, the data were normalized, then the normalized decision matrix was weighted, and the approximate border area matrix was determined. following these steps, the alternatives were placed in relation to the value of the approximate border area and then ranked. more details about the critic and mabac methods are given below. the ranking of alternatives was done for all selected time periods. after the selected countries were ranked, it was necessary to analyze the research results. the analysis of the research results was performed in two ways. first, the results obtained by the mabac method were analyzed and compared with the results obtained by applying other methods of multi-criteria analysis. after confirming the results obtained by the mabac method, a sensitivity analysis was conducted. the sensitivity analysis examines the extent to which a criterion has an impact on the ranking of alternatives. the sensitivity analysis and the comparison of results were performed for all time periods to get a complete insight into the lpi performance of the gcc countries.

3.1. critic method

the critic method was developed by diakoulaki, et al. (1995). this method serves to determine objective values of the criteria weights, taking into account the intensity of contrast and conflict contained in the structure of the decision problem (puška, et al., 2018). to determine the contrast of the criteria, the standard deviations of the standardized values of the variants per column are used, as well as the correlation coefficients of all pairs of columns. the steps in implementing the critic method are as follows:
step 1. defuzzification of the initial decision matrix. before the other steps of the critic method are performed, fuzzy numbers need to be converted to numerical values (kiani mavi, et al., 2016).
defuzzification is performed using the following expression:

p(\tilde{m}) = \frac{1}{6}\,(m_1 + 4 m_2 + m_3)    (1)

where m_1 is the first value of the fuzzy number, m_2 is the second value of the fuzzy number and m_3 is the third value of the fuzzy number.
step 2. normalization of the defuzzified initial decision matrix using the following expressions:
for criteria to be maximized:

r_{ij} = \frac{x_{ij} - x_j^{**}}{x_j^{*} - x_j^{**}}    (2)

for criteria to be minimized:

r_{ij} = 1 - \frac{x_{ij} - x_j^{**}}{x_j^{*} - x_j^{**}}    (3)

where x_j^{*} is the maximum value of the feature for a given criterion and x_j^{**} is the minimum value of the feature for a given criterion.
step 3. calculation of the values of the standard deviation and the symmetric linear correlation matrix of all pairs per column.
step 4. determination of the amount of information using the following expression:

c_j = \sigma_j \sum_{k=1}^{m} (1 - r_{jk}), \quad j = 1, \ldots, m    (4)

where \sigma_j is the standard deviation of criterion j and r_{jk} is the correlation coefficient between criteria j and k.
step 5. calculation of the final weight values using the following expression:

w_j = \frac{c_j}{\sum_{j=1}^{m} c_j}    (5)
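the weighting step described above can be illustrated with a short python/numpy sketch; the function name and the implementation details are our own choices rather than part of the original computation, and because the lpi scores used in this study are crisp values, the defuzzification of expression (1) is skipped and the sketch starts from expressions (2) and (3).

```python
import numpy as np

def critic_weights(X, benefit=None):
    """objective criteria weights following expressions (2)-(5);
    X: decision matrix, rows = alternatives, columns = criteria."""
    m, n = X.shape
    benefit = [True] * n if benefit is None else benefit
    R = np.empty_like(X, dtype=float)
    for j in range(n):
        lo, hi = X[:, j].min(), X[:, j].max()
        r = (X[:, j] - lo) / (hi - lo)
        R[:, j] = r if benefit[j] else 1.0 - r       # expressions (2) and (3)
    sigma = R.std(axis=0, ddof=1)                    # contrast of each criterion
    rho = np.corrcoef(R, rowvar=False)               # correlations between criteria
    C = sigma * (1.0 - rho).sum(axis=0)              # amount of information, expression (4)
    return C / C.sum()                               # weights, expression (5)
```

applied to the 2018 decision matrix, this sketch approximately reproduces the weights reported in table 4 (0.138, 0.148, 0.240, 0.099, 0.192, 0.183); small differences are possible due to rounding.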
3.2. mabac method

the mabac method was developed by pamučar and ćirović (2015). the basic assumption of the mabac method is reflected in the definition of the distance of an alternative from the border approximate area. the border approximate area represents the average value for all alternatives; if an alternative is above that value, its value will be positive, and vice versa. the mabac method consists of several steps.
step 1. construct the initial decision matrix. as a first step, m alternatives are evaluated according to n criteria. the alternatives are represented with vectors a_i = (x_{i1}, x_{i2}, ..., x_{in}), where x_{ij} is the value of the i-th alternative by the j-th criterion (i = 1, 2, ..., m; j = 1, 2, ..., n).
step 2. normalization of the elements of the initial matrix. the elements of the initial decision matrix are normalized by the following expressions:
for benefit-type criteria:

t_{ij} = \frac{x_{ij} - x_i^{-}}{x_i^{+} - x_i^{-}}    (6)

for cost-type criteria:

t_{ij} = \frac{x_{ij} - x_i^{+}}{x_i^{-} - x_i^{+}}    (7)

where x_i^{-} represents the minimum value of the left distribution of fuzzy numbers of the observed criterion by alternatives, and x_i^{+} represents the maximum value of the right distribution of fuzzy numbers of the observed criterion by alternatives.
step 3. calculation of the elements of the weighted matrix (v) (božanić, et al., 2019):

v_{ij} = w_i \cdot t_{ij} + w_i    (8)

where w_i represents the weighted coefficient of the criterion.
step 4. determination of the border approximate area matrix (g):

g_j = \left( \prod_{i=1}^{m} v_{ij} \right)^{1/m}    (9)

where m represents the total number of alternatives.
step 5. calculation of the matrix elements of the distance of the alternatives from the border approximate area. the distance of the alternatives from the border approximate area (q_{ij}) is defined as the difference between the elements of the weighted matrix (v) and the values of the border approximate areas (g):

q = v - g    (10)

now, the border approximation area value for each criterion function serves as a reference point (benchmark value) for the criteria-wise performance of an alternative a_i. each individual candidate will belong to one of three different areas, namely the border approximation area (g), the upper approximation area (g^+), and the lower approximation area (g^-). the ideal alternative (a_i^+) can be found in the upper approximation area (g^+), whereas the lower approximation area (g^-) contains the anti-ideal alternative (a_i^-) (božanić, et al., 2016).

a_i \in \begin{cases} g^{+}, & q_{ij} > 0 \\ g, & q_{ij} = 0 \\ g^{-}, & q_{ij} < 0 \end{cases}    (11)

for an alternative a_i to be chosen as the best from the set, it is necessary for it to belong, by as many criteria as possible, to the upper approximate area (g^+). a higher value q_{ij} \in g^{+} indicates that the alternative is closer to the ideal alternative, while a lower value q_{ij} \in g^{-} indicates that the alternative is closer to the anti-ideal alternative.
step 6. ranking of alternatives. the value of the criteria function by alternative is obtained as the sum of the distances of the alternative from the border approximate areas. by summing up the elements of the matrix q per rows, the final values of the criteria functions of the alternatives are obtained:

s_i = \sum_{j=1}^{n} q_{ij}, \quad i = 1, 2, \ldots, m    (12)
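analogously, a minimal python/numpy sketch of the ranking procedure in expressions (6)-(12) is given below; the function names are illustrative, and the figures obtained in this way may differ in the last decimals from those reported in tables 6 and 7, depending on the exact weight vector and rounding applied.

```python
import numpy as np

def mabac_scores(X, w, benefit=None):
    """mabac criterion functions s_i following expressions (6)-(12);
    a larger s_i means a better-ranked alternative."""
    m, n = X.shape
    benefit = [True] * n if benefit is None else benefit
    T = np.empty_like(X, dtype=float)
    for j in range(n):
        lo, hi = X[:, j].min(), X[:, j].max()
        T[:, j] = ((X[:, j] - lo) / (hi - lo) if benefit[j]
                   else (X[:, j] - hi) / (lo - hi))   # expressions (6) and (7)
    V = w * (T + 1.0)                  # weighted matrix, expression (8)
    G = V.prod(axis=0) ** (1.0 / m)    # border approximation area, expression (9)
    Q = V - G                          # distances from the border area, expression (10)
    return Q.sum(axis=1)               # expression (12)

def rank(scores):
    """rank 1 = best (highest score)."""
    order = np.argsort(-scores)
    ranks = np.empty(len(scores), dtype=int)
    ranks[order] = np.arange(1, len(scores) + 1)
    return ranks
```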
4. results

before we ranked the gcc countries according to the lpi indicators, it was necessary to form an initial decision matrix. the initial decision matrix is presented in table 2. the lpi data for the gcc countries are comparable from the most recent data to the data from previous years. after the initial decision matrices for the observed time periods have been formed (table 2), the steps of the critic and mabac methods were performed. the example of the data from 2018 explains the way in which the gcc countries are ranked.

table 2. lpi indicators for gcc countries in the period 2012-2018
(year 2018)     c1    c2    c3    c4    c5    c6
bahrain        2.67  2.72  3.02  2.86  3.01  3.29
kuwait         2.73  3.02  2.63  2.80  2.66  3.37
oman           2.87  3.16  3.30  3.05  2.97  3.80
qatar          3.00  3.38  3.75  3.42  3.56  3.70
saudi arabia   2.66  3.11  2.99  2.86  3.17  3.30
uae            3.63  4.02  3.85  3.92  3.96  4.38
(year 2016)     c1    c2    c3    c4    c5    c6
bahrain        3.14  3.10  3.33  3.38  3.32  3.58
kuwait         2.83  2.92  3.62  2.79  3.16  3.51
oman           2.76  3.44  3.35  3.26  3.09  3.50
qatar          3.55  3.57  3.58  3.54  3.50  3.83
saudi arabia   2.69  3.24  3.23  3.00  3.25  3.53
uae            3.84  4.07  3.89  3.82  3.91  4.13
(year 2014)     c1    c2    c3    c4    c5    c6
bahrain        3.29  3.04  3.04  3.04  3.29  2.80
kuwait         2.69  3.16  2.76  2.96  3.16  3.39
oman           2.63  2.88  3.41  2.84  2.84  3.29
qatar          3.21  3.44  3.55  3.55  3.47  3.87
saudi arabia   2.86  3.34  2.93  3.11  3.15  3.55
uae            3.42  3.70  3.20  3.50  3.57  3.92
(year 2012)     c1    c2    c3    c4    c5    c6
bahrain        2.67  3.08  2.83  2.94  3.42  3.42
kuwait         2.73  2.82  2.68  2.68  2.98  3.11
oman           3.10  2.96  2.78  2.73  2.59  3.17
qatar          3.12  3.23  2.88  3.25  3.50  4.00
saudi arabia   2.79  3.22  3.10  2.99  3.21  3.76
uae            3.61  3.84  3.59  3.74  3.81  4.10

since the critic and mabac methods use the same data normalization, the first step is the same for both methods and represents the normalization of the initial decision matrix (table 3). all criteria are of benefit type, so expression (2) or (6) is used. after this step, the specific steps of the critic and mabac methods are applied. since it is necessary to calculate the weights of the criteria in the first place, the steps of the critic method are explained first (table 4). after the normalization, the values of the standard deviation and the correlation coefficients are calculated. after that step, the amount of information and the weights of the criteria are determined.

table 3. normalized decision matrix for lpi 2018
                c1    c2    c3    c4    c5    c6
bahrain        0.01  0.00  0.32  0.05  0.27  0.00
kuwait         0.07  0.23  0.00  0.00  0.00  0.07
oman           0.22  0.34  0.55  0.22  0.24  0.47
qatar          0.35  0.51  0.92  0.55  0.69  0.38
saudi arabia   0.00  0.30  0.30  0.05  0.39  0.01
uae            1.00  1.00  1.00  1.00  1.00  1.00

table 4. steps in the critic method (2018)
              c1     c2     c3     c4     c5     c6
st. dev.     0.380  0.339  0.387  0.393  0.358  0.387
correlation:
      c1     c2     c3     c4     c5     c6
c1   1.000  0.955  0.811  0.972  0.858  0.965
c2   0.955  1.000  0.792  0.942  0.861  0.925
c3   0.811  0.792  1.000  0.915  0.917  0.822
c4   0.972  0.942  0.915  1.000  0.935  0.933
c5   0.858  0.861  0.917  0.935  1.000  0.784
c6   0.965  0.925  0.822  0.933  0.784  1.000
c_j          0.167  0.178  0.288  0.119  0.231  0.221
w            0.138  0.148  0.240  0.099  0.192  0.183

the same procedure is performed for the other time periods, and the individual weights are determined. to obtain a single weight vector for the observed periods, the average weights are calculated. based on the obtained weights, one can conclude that criterion c3 has the highest importance (w = 0.238), while criterion c4 has the least importance (w = 0.120).

table 5. weights of criteria by observed time periods
        c1     c2     c3     c4     c5     c6
2012   0.265  0.091  0.147  0.090  0.215  0.191
2014   0.189  0.124  0.285  0.103  0.124  0.174
2016   0.136  0.179  0.279  0.187  0.119  0.101
2018   0.138  0.148  0.240  0.099  0.192  0.183
w      0.182  0.136  0.238  0.120  0.163  0.162

after the weights have been calculated, the steps of the mabac method are applied and the gcc countries are ranked according to the lpi indicators. after the initial decision matrix is normalized (table 3), this matrix is weighted (expression 8). after this, the border approximate area matrix is determined; the geometric mean (expression 9) is used here. the next step determines the distance of the alternatives from the border approximate area (table 6) and calculates the sum of these values. based on this value, the ranking of the alternatives is determined. the best alternative is the one that has the greatest value of s_i, and vice versa. the obtained results have shown that the uae has the best lpi indicators, followed by qatar, while kuwait has the worst lpi indicators. in the same way, the calculation of the values of the mabac method and the ranking for the other observed time periods is performed.

table 6. alternatives' distances from the border approximate area and the result of the mabac method (2018)
                c1      c2      c3      c4      c5      c6      s_i    rank
bahrain       -0.020  -0.036  -0.015  -0.013  -0.005  -0.028  -0.117   5
kuwait        -0.009  -0.005  -0.091  -0.019  -0.049  -0.016  -0.188   6
oman           0.017   0.010   0.039   0.007  -0.010   0.048   0.113   3
qatar          0.042   0.033   0.127   0.047   0.064   0.034   0.346   2
saudi arabia  -0.022   0.005  -0.021  -0.013   0.015  -0.026  -0.062   4
uae            0.160   0.100   0.146   0.100   0.114   0.135   0.755   1

the results have shown that the uae has the best lpi indicators for the entire observed time period, while kuwait has the worst lpi indicators for three years (2018, 2014, 2012), and saudi arabia has the worst lpi indicators in 2016. based on these findings, one can conclude that the uae has the best logistics indicators, thus suggesting that this country provides the best solution for establishing a logistics center in the gcc region.

table 7. ranking of gcc countries using lpi indicators for the period 2012-2018
                 2018           2016           2014           2012
countries       s_i    rank    s_i    rank    s_i    rank    s_i    rank
bahrain       -0.117    5     0.021    3    -0.003    4     0.003    4
kuwait        -0.188    6    -0.063    4    -0.163    6    -0.198    6
oman           0.113    3    -0.072    5    -0.133    5    -0.118    5
qatar          0.346    2     0.350    2     0.479    2     0.264    2
saudi arabia  -0.062    4    -0.140    6     0.004    3     0.149    3
uae            0.755    1     0.758    1     0.487    1     0.738    1
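for completeness, the two sketches given earlier can be chained to recompute the per-year rankings; note that the paper aggregates the critic weights by averaging them over the four periods (table 5), so the averaged weight vector may be passed instead of the per-year weights computed here, and the resulting scores may therefore differ slightly from table 7.

```python
# chaining the two sketches for a single year; the same loop is repeated
# for the 2012, 2014 and 2016 matrices (assumes critic_weights, mabac_scores,
# rank, countries and lpi_2018 from the earlier sketches)
w_2018 = critic_weights(lpi_2018)
s_2018 = mabac_scores(lpi_2018, w_2018)
for c, s, r in zip(countries, s_2018, rank(s_2018)):
    print(f"{c:>14s}  s_i = {s:+.3f}  rank = {r}")
```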
5. analysis of the results

in order to confirm the results obtained using the combination of the mabac and critic methods, the ranking of alternatives for all observed periods was also performed using the following methods: saw (simple additive weighting), aras (additive ratio assessment), waspas (weighted aggregated sum product assessment), topsis (technique for order performance by similarity to ideal solution) and marcos (measurement of alternatives and ranking according to the compromise solution). this represents the first step in analyzing the results. the second step is to examine the sensitivity of the results to changes in the criteria weights. the examination of the reliability of the results obtained by applying the other methods showed that there is no deviation in the ranking of the gcc countries according to the lpi indicators. only for the 2014 indicators is there a small deviation when the topsis method is used. according to the results of this method, qatar has better results than the uae for this year. this result was to be expected, because the results obtained using the mabac method also showed that for this year there is only a small difference between these two countries. based on the obtained results, it can be concluded that the results obtained by the mabac method are reliable and verified.

figure 1. results of gcc countries ranking according to lpi indicators for the period 2012-2018 (rankings of the six countries under the mabac, saw, aras, waspas, marcos and topsis methods for each of the four observed years)

the second step of our results analysis was to conduct a sensitivity analysis. when conducting a sensitivity analysis, it is examined how the change in the weights of the criteria affects the ranking order of the alternatives (puška et al., 2020). in accordance with this, scenarios were formed: the first scenario does not differentiate between the criteria and gives the same importance to all of them, while each of the other scenarios gives one of the criteria five times more importance compared to the other criteria. since six criteria are used in this study, seven scenarios were formed in order to perform the sensitivity analysis.

table 8. scenarios in the sensitivity analysis
              c1      c2      c3      c4      c5      c6
scenario 1   0.1667  0.1667  0.1667  0.1667  0.1667  0.1667
scenario 2   0.5000  0.1000  0.1000  0.1000  0.1000  0.1000
scenario 3   0.1000  0.5000  0.1000  0.1000  0.1000  0.1000
scenario 4   0.1000  0.1000  0.5000  0.1000  0.1000  0.1000
scenario 5   0.1000  0.1000  0.1000  0.5000  0.1000  0.1000
scenario 6   0.1000  0.1000  0.1000  0.1000  0.5000  0.1000
scenario 7   0.1000  0.1000  0.1000  0.1000  0.1000  0.5000
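the scenario weights in table 8 and their effect on the ranking can be generated with the short sketch below, which assumes the mabac_scores and rank helpers and the lpi_2018 matrix from the earlier illustrative sketches.

```python
import numpy as np

# scenario weights from table 8: scenario 1 treats all criteria equally,
# scenarios 2-7 give one criterion five times the weight of each of the others
scenarios = [np.full(6, 1.0 / 6.0)]
for favoured in range(6):
    w = np.full(6, 0.1)
    w[favoured] = 0.5
    scenarios.append(w)

# re-rank the countries under every scenario for one of the observed years
for k, w in enumerate(scenarios, start=1):
    r = rank(mabac_scores(lpi_2018, w))
    print(f"scenario {k}:", dict(zip(countries, r)))
```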
the sensitivity analysis has shown that the results for 2018 are the least sensitive to changes in the weights of the criteria. we found a change in the rankings only in scenarios 3 and 7, in which kuwait showed better results compared to bahrain. this can be justified by the fact that kuwait has better performances than bahrain in the infrastructure and international shipments indicators. the sensitivity analysis for 2016 has shown that the ranking does not change for the uae and qatar, while the ranking changes for the other countries. the highest oscillation was found for kuwait, which took the fifth place in three scenarios, the sixth place in three scenarios, and the third place in one scenario. saudi arabia also took the sixth place in three scenarios and the fifth place in three scenarios. oman was ranked sixth in one scenario. bahrain took the third place in five scenarios and the fourth place in two scenarios.

figure 2. results of the sensitivity analysis (country rankings under the seven scenarios for each of the four observed years)

the sensitivity analysis for 2014 has shown the largest oscillations in the rankings. in almost all scenarios the uae took first place; only in two scenarios did qatar take first place. the reason for this should be sought in the fact that qatar had better performances in the services and timeliness indicators compared to the uae. oman was ranked last in five scenarios, while bahrain and kuwait were each ranked last in one scenario. the sensitivity analysis for 2012 has shown less oscillation in the rankings. saudi arabia had a better performance in the services indicator compared to qatar and is ranked better in four scenarios. oman had a better performance in the customs indicator compared to bahrain and is ranked better in scenario 2, while kuwait had a better performance in the tracking and tracing indicator compared to oman and is ranked better in scenario 6. the sensitivity analysis has shown that the lpi indicators were the most conflicting in 2014 and 2016, while in 2018 they were the least conflicting, which caused the least change in the ranking in the sensitivity analysis. this analysis has also shown that the uae has the best performances in the lpi compared to the other countries and that it should be the first choice for a logistics center in this part of the world for trading and exporting goods globally.

6. discussion

nowadays, it is fully accepted in theory and practice that logistics centers are extremely important for global supply chains. they represent a significant strategic tool in international trade to reduce costs, reduce the duration of the supply process, increase the sustainability of supply, and also increase overall competitiveness. to achieve these benefits, decision-makers must find the best locations for logistics centers that will improve competitiveness in global supply chains. this leads to the conclusion that the evaluation of different locations is very important for the effectiveness of logistics centers. we fully agree with the scholars who argue that the selection of a logistics center location should be based on the analysis of multiple criteria. uyanik et al. (2018) presented different approaches that can be found in the literature, in which different criteria are used in the analysis of the best location for a logistics center. as these scholars found, there is no single approach in the literature regarding these criteria. to avoid these challenges about which criteria should be used, we followed what was proposed by martí et al. (2017), who highlighted the benefits of the logistics performance index (lpi).
we support their thesis that lpi can help countries to get to know their business partners and to understand what should be adjusted to stay competitive in the logistics sector. our findings have shown that the united arab emirates leads the gcc region when it comes to logistics performance. in this sense, this country can be perceived as the best choice for the location of a logistics center that will allow the gcc region to be connected into a single supply chain. several scholars indicated a highly developed and modern logistics infrastructure in this country (jacobs and hall,2007; memedovic et al., 2008; fernandes and rodrigues, 2009; ziadah, 2018). we are therefore fully convinced that this is one of the most important reasons why the uae is the best ranked with our study. as jacobs and hall (2007), we also believe that dubai is the most important transportation hub in the region. jebel ali port and port rashid, dubai airports, established free zones and regulatory reforms enabled dubai and the uae to become one of the most competitive transportation and logistics hubs in the gcc region. and development plans have yet to showcase new projects in dubai that will further strengthen their competitive position as a logistics hub. while we write this paper, dubai authorities announced full foreign share ownership of business in dubai that will directly influence many foreign companies to locate their business in dubai. thus, this will lead to higher demand for logistics services in dubai and the uae. furthermore, experience in logistics services makes an advantage when it comes to the position of the uae through logistics performance. when we point to experience, we mean the fact that dubai is home to world-renowned logistics and transportation companies such as dp world, emirates airlines jebel ali free zone (jafza), and dubai world central, that is also stressed out by fernandes and rodrigues (2009). in addition to the rapid progress of the united arab emirates in the field of logistics, other countries in the gcc region have made significant achievements. the gcc countries have learned about the importance of logistics and have started to build exceptional logistics infrastructure that gives this region the opportunity to become a logistics performances of gulf cooperation council’s countries in global supply chains 189 logistics hub for trade between east and west. however, certain overlaps accompanied by costly infrastructure investments may lead to some other regions in asia becoming more competitive in providing logistics services for global trade. therefore, the gcc countries region need to improve coordination in the governance process of regional supply chains and to maintain and improve its position in global trade of commodities in the long run. this paper points out the importance of regional cooperation between gcc countries. as coe at al. (2004) highlighted, we also support an integrated conceptual framework for ‘globalizing’ regional development. thus, to integrate gcc as a single supply chain into global trade, the gcc countries should work on global production networks and share regional assets, such as logistics centers. in this regard, we support durugbo et al. (2020) in their appeal to build long and profitable relationships with customers to replace traditionally fragmented approaches in the gcc region. the gulf cooperation council should have a special role in coordinating developmental activities of the region. 
the gcc administrative bodies should improve their governance capacity that is also indicated by dadush and falcao (2009) and create a joint development program for the region. the importance of joint development programs in the field of transport and logistics such as “new silk road” which should be the longest world road, or “traceca” which is an east-west transport corridor stretching from central asia to europe, was indicated by khassenovakaliyeva et al. (2017) as highly important developmental programs to increase competitiveness of some countries or regions. following similar patterns, the gcc can establish joint cooperation on the establishment of logistics centers that will serve in connecting the region into regional supply chains. dadush and falcao (2009) gave a very good proposal that gulf cooperation council (gcc) must work to improve logistics and reduce non-tariff barriers to trade. fernandes and rodrigues (2009) went even more deeply by proposing policy makers to aggressively pursue the monetary union in the gcc region that certainly can facilitate establishing logistics networks more easily the countries of the gcc region need to accept the fact of growing competitors in the field of logistics, such as singapore or some other asian countries pointed by fernandes and rodrigues (2009). thus, they should start to improve regional collaboration in the logistics chain. what some authors such as fernandes and rodrigues (2009) point out, refers to the need to carefully analyze what they called the logistics skill gap amongst the workforce, including high rents and costs of operation in dubai and the uae. despite the remarkable development of modern logistics infrastructure in this country, these issues need to be considered in order to maintain competitiveness. only in this way can this location maintain its long-term logistics performance compared to competitors. furthermore, some scholars such as sundarakani et al. (2012) favored adopting it solutions that can improve the effectiveness of logistics centers, but this should be followed with appropriate education of managers and employees to manage these systems in an efficient way. what we also noticed is that locations that allow multimodal access to transport and logistics are much better positioned in the context of logistics performance. this has already been discussed by some authors (fernandes and rodrigues, 2009; uyanik et al. 2018; kazançoğlu et al. 2019). we have to agree with fernandes and rodrigues (2009) who indicated that dubai represents an excellent world class integrated hub. this is where we find the main justification for the excellent results that dubai and the united arab emirates have achieved through this study. we must not forget certain strategic issues. the countries of the gcc region, as stated by memedovic et al. (2008) are still more focused on export of oil and similar energy resources. therefore, we stojanović and puška/decis. mak. appl. manag. eng. 4 (1) (2021) 174-193 190 support the strategic directions of individual countries, including the united arab emirates, to put focus on other sectors and to develop their logistics capacities. we must not forget the historical fact that the cities located on the main transport routes developed the most. it is noticeable that the countries of the gcc region, especially the united arab emirates, accept this fact and take big steps in the development of logistics infrastructure. but, as pham et al. 
(2017) suggested, this should be done in a systematic way by having master plans for the development of a logistics center system that will improve the practice of adequate selection and prioritizing of locations adequate for logistics centers. a crucial role at the regional level in the development of these plans and their coordination should be taken by the administrative bodies of the gulf cooperation council. 7. conclusions the selection of an adequate location for a logistics center is one of the most important issues in the field of logistics operations management. in the literature, location selection is largely based on multi-criteria decision models. in our study we used data from the logistics performance index developed by the world bank and applied a hybrid multi-criteria approach based on the critic and mabac methods. among the six gcc countries, we found that the united arab emirates are the best ranked in the observed period from 2012 to 2018. the exceptional logistics infrastructure built in this country certainly contributes to this result. in fact, the entire gcc region is taking big steps in the development of logistics infrastructure. our study showed that kuwait achieves poorer logistics performance compared to other countries in the region. some countries during the observed period had certain oscillations in the movement of logistics performance, such as saudi arabia. this study recommends that, based on logistics performance, the united arab emirates represent a country that can provide the best conditions for location of a regional logistics center that can connect the gcc region more efficiently into global supply chains. the significance of this study can be found in the study findings that indicates to gcc countries which areas should be improved to elevate overall logistics performance. this study did not deal with a detailed analysis of the structure of exports by countries and product types. therefore, transport and other relevant costs related to the inclusion of gcc countries in global supply chains through a single logistics center were not considered, which is one of the limitations of this study. furthermore, this study focused on the logistics performances of individual gcc countries to find which country provides the best logistical conditions but did not search for suitable locations for logistics centers in these countries. based on these limitations of the study, new areas are opened for future very interesting research endeavors. in future research, it can be examined the role of lpi in the competitiveness of individual countries and determined how important a particular lpi criterion is for the competitiveness. focus of future research can be also on other criteria and making decisions not with lpi only. however, the aim of this paper was to examine the trend of lpi for gcc countries to perceive which location provide best logistics performance for establishing logistics center. in addition, it is possible to consider other fuzzy methods when determining logistics centers and to establish hybrid methods. this research provides basic postulates for determining the location for logistics centers. logistics performances of gulf cooperation council’s countries in global supply chains 191 author contributions: each author has participated and contributed sufficiently to take public responsibility for appropriate portions of the content. funding: this research received no external funding. 
conflicts of interest: the authors declare that there is no conflict of interest. references akkermans, h., bogerd, p., & vos, b. (1999). virtuous and vicious cycles on the road towards international supply chain management. international journal of operations & production management. biswas, s., & anand, o. p. (2020). logistics competitiveness index-based comparison of brics and g7 countries: an integrated psi-piv approach. iup journal of supply chain management, 17(2), 32-57. božanić, d., tešić, d., & kočić, j. (2019). multi-criteria fucom – fuzzy mabac model for the selection of location for construction of single-span bailey bridge. decision making: applications in management and engineering, 2(1), 132-146. božanić, d.i., pamučar, d.s., & karović, s.m. (2016). application the mabac method in support of decision-making on the use of force in a defensive operation. tehnika, 71(1), 129-136. chow, g., heaver, t. d., & henriksson, l. e. (1994). logistics performance, international journal of physical distribution & logistics management. christopher, m. i. (2017). logistics & supply chain management. fourth edition, prentice hall coe, n., hess, m., yeung, h., dicken, p. & henderson, j. (2004). globalizing regional development: a global production networks perspective. transactions of the institute of british geographers, 29(4), 468-484. dadush, u., & falcao, l. (2009). regional arrangements in the arabian gulf. universitäts-und landesbibliothek sachsen-anhalt. diakoulaki, d., mavrotas, g., & papayannakis, l. (1995). determining objective weights in multiple criteria problems: the critic method. computers & operations research, 22(7), 763-770. durugbo, c. m., amoudi, o., al-balushi, z., & anouze, a. l. (2020). wisdom from arabian networks: a review and theory of regional supply chain management. production planning & control, 1-17. elevli, b. (2014). logistics freight center locations decision by using fuzzypromethee. transport, 29(4), 412-418. fernandes, c., & rodrigues, g. (2009). dubai's potential as an integrated logistics hub. journal of applied business research (jabr), 25(3). ibrahim, h. w., zailani, s., & tan, k. c. (2015). a content analysis of global supply chain research. benchmarking: an international journal. stojanović and puška/decis. mak. appl. manag. eng. 4 (1) (2021) 174-193 192 kazançoğlu, y., özbiltekin, m., & özkan-özen, y. d. (2019). sustainability benchmarking for logistics center location decision. management of environmental quality: an international journal. khassenova-kaliyeva, a. b., nurlanova, n. k., & myrzakhmetova, a. m. (2017). central asia as a transcontinental transport bridge based on the transport and logistic system of the countries of this region. international journal of economic research, 14(7), 365382. kishore, p., & padmanabhan, g. (2016). an integrated approach of fuzzy ahp and fuzzy topsis to select logistics service provider. journal for manufacturing science and production, 16(1), 51-59. klassen. r. d. & whybark, d. c. (1994), ``barriers to the management of international operations'', journal of operations management, vol. 11, pp. 385-96. kuo, m.s. (2011). optimal location selection for an international distribution center by using a new hybrid method. expert systems with applications, 38(6), 7208-7221. larson, p. d., & halldorsson, a. (2004). logistics versus supply chain management: an international survey. international journal of logistics: research and applications, 7(1), 17-31 li, y., liu, x., & chen, y. (2011). 
selection of logistics center location using axiomatic fuzzy set and topsis methodology in logistics management. expert systems with applications, 38(6), 7901-7908. martí, l., martín, j. c., & puertas, r. (2017). a dea-logistics performance index. journal of applied economics, 20(1), 169-192. meijboom, b.r. & vos, b. (1997). international manufacturing and location decisions: balancing configuration and co-ordination. international journal of operations & production management, 17(7), 790-805. memedovic, o., ojala, l., rodrigue, j. p., & naula, t. (2008). fuelling the global value chains: what role for logistics capabilities? international journal of technological learning, innovation and development, 1(3), 353-374. onden, i., acar, a. z. & eldemir, f. (2016). evaluation of the logistics center location using a multi-criteria spatial approach. transport, 33(2), 322-334 ou, c. w., & chou, s. y. (2009). international distribution center selection from a foreign market perspective using a weighted fuzzy factor rating system. expert systems with applications, 36(2), 1773-1782 pamučar d., & ćirović g. (2015). the selection of transport and handling resources in logistics centers using multi-attributive border approximation area comparison (mabac). expert systems with applications, 42(6), 3016-3028. pham, t. y., ma, h. m., & yeo, g. t. (2017). application of fuzzy delphi topsis to locate logistics centers in vietnam: the logisticians’ perspective. the asian journal of shipping and logistics, 33(4), 211-219. puška, a., beganović, a., & šadić, s. (2018). model for investment decision making by applying the multi-criteria analysis method. serbian journal of management, 13(1), 728. logistics performances of gulf cooperation council’s countries in global supply chains 193 puška, a., stojanović, i., maksimović, a., & osmanović, n. (2020). evaluation software of project management used measurement of alternatives and ranking according to compromise solution (marcos) method. operational research in engineering sciences: theory and applications, 3(1), 89-102. rao, c., goh, m., zhao, y. & zheng, j. (2015). location selection of city logistics centers under sustainability. transportation research part d: transport and environment, 36, 29-44. reyes, p., raisinghani, m. s., & singh, m. (2002). global supply chain management in the telecommunications industry: the role of information technology in integration of supply chain entities. journal of global information technology management, 5(2), 4867. scully, j., & fawcett, s. e. (1993). comparative logistics and production costs for global manufacturing strategy. international journal of operations & production management, 13(12), 62-78. stević, ž., vesković, s., vasiljević, m., & tepić, g. (2015). the selection of the logistics center location using ahp method. in 2nd logistics international conference, 86-91. sun, y., lu, y., & zhang, c. (2019). fuzzy linear programming models for a green logistics center location and allocation problem under mixed uncertainties based on different carbon dioxide emission reduction methods. sustainability, 11(22), 6448. sundarakani, b., tan, a. w. k., & over, d. v. (2012). enhancing the supply chain management performance using information technology: some evidence from uae companies. international journal of logistics systems and management, 11(3), 306324. uyanik, c., tuzkaya, g., & oğuztimur, s. (2018). a literature survey on logistics centers' location selection problem. 
sigma: journal of engineering & natural sciences/mühendislik ve fen bilimleri dergisi, 36(1), 141-160. wang, b., xiong, h., & jiang, c. (2014). a multicriteria decision making approach based on fuzzy theory and credibility mechanism for logistics center location selection. the scientific world journal, 2014, article id 347619 wang, m. h., lee, h. s., & chu, c. w. (2010). evaluation of logistic distribution center selection using the fuzzy mcdm approach. international journal of innovative computing, information and control, 6(12), 5785-5796. zaralı, f., & yazgan, h. r. (2016). solution of logistics center selection problem using the axiomatic design method. world academy of science, engineering and technology, international journal of computer, electrical, 10(3), 547-553. zavadskas, e.k., stević, ž., turskis, z., & tomašević m., (2019). a novel extended edas in minkowski space (edas-m) method for evaluating autonomous vehicles. studies in informatics and control, 28(3), 255-264. ziadah, r. (2018). constructing a logistics space: perspectives from the gulf cooperation council. environment and planning d: society and space, 36(4), 666-682. © 2018 by the authors. submitted for possible open access publication under the terms and conditions of the creative commons attribution (cc by) license (http://creativecommons.org/licenses/by/4.0/). plane thermoelastic waves in infinite half-space caused decision making: applications in management and engineering vol. 3, issue 2, 2020, pp. 49-69 issn: 2560-6018 eissn: 2620-0104 doi: https://doi.org/10.31181/dmame2003049c * corresponding author. e-mail addresses: s_chakraborty00@yahoo.co.in (s. chakraborty), sohinirits@gmail.com (r. chattopadhyay), santonabchakraborty@gmail.com (s. chakraborty), an integrated d-marcos method for supplier selection in an iron and steel industry ritwika chattopadhyay 1, santonab chakraborty 2, shankar chakraborty 1* 1 department of production engineering, jadavpur university, kolkata, west bengal, india 2 industrial engineering and management department, maulana abul kalam azad university of technology, west bengal, india received: 2 june 2020; accepted: 30 august 2020; available online: 30 august 2020. original scientific paper abstract: the modern era of manufacturing has recognized the importance of a sustainable supply chain management (scm) system in order to attain the desired level of stability and productivity for fulfillment of the customers’ requirements. selection of the most suitable set of suppliers is an integral part of scm which can be effectively solved with the deployment of different multicriteria decision making (mcdm) techniques. this paper endeavors to resolve the uncertainty involved in the decision making process for supplier selection with the application of d numbers. a relatively new mcdm technique in the form of measurement alternatives and ranking according to compromise solution (marcos) is later employed for ranking of a set of competing suppliers. this integrated approach is finally applied to choose the best performing supplier in a leading indian iron and steel making industry based on seven selective evaluation criteria and opinions of three decision makers. it would provide more generic and unbiased results while addressing uncertainty and ambiguity involved in the supplier selection process. key words: supplier selection; d numbers; marcos; iron and steel industry. 1. 
introduction in the modern day highly competitive manufacturing environment, a sustainable supply chain management (scm) system has been recognized as one of the predominant issues for survival and long-term prosperity of any organization. a sustainable scm system ensures supply of the best quality products at reduced costs to the customers, hence helping a manufacturing organization capturing its superior chattopadhyay et al./decis. mak. appl. manag. eng. 3 (2) (2020) 49-69 50 position over its competitors in the market. in case of expensive products, it focuses on quick delivery in order to minimize inventory and associated holding cost. thus, an efficient sc should take care of a wide range of objectives keeping in mind the welfares of both the organization and its customers. the present day manufacturing organizations should focus on devising a reliable as well as flexible sc based on a proper opinionated research. in sc, an essential responsibility bestowed on the purchasing department is to identify a set of compatible suppliers based on their capabilities to fulfill the primary requirements of cost, quality, delivery, technological capability, production capacity, financial strength etc. thus, with the adoption and advancement of scm, supplier selection has started playing a pivotal role. the supplier selection process mainly focuses on the following tasks, i.e. a) identification of the products to be procured, b) assimilation of a list of potential suppliers, c) shortlisting of the key factors (criteria) based on which the suppliers need to be evaluated, d) formation of a team of experts/decision makers to extensively analyze and strategize this selection process, e) choosing of the most apposite supplier while disposing off the inefficient ones, and f) continuous performance evaluation of the finally sleected supplier (de boer et al., 2001). over the course of development, supplier selection process has undergone a gradual transition from an intuitionistic approach to a more tangible strategic one, hence characterizing its further complication (parkhi, 2015). it has already been well acknowledged that scs form the backbone of most of the manufacturing industries for selection of the reliable suppliers who can provide continuous stock of quality raw materials in order to fulfill the basic objectives of productivity and profitability with economic justification to the manufacturing processes. in supplier selection process, the main challenge and mathematical complexity lies in the identification of disparate evaluation criteria with varying degrees of importance, requiring a sensible trade-off amongst them. the manufacturing sector, heavily relying on scs to achieve its goals, finds strong dependence on the application of different multi-criteria decision making (mcdm) techniques to choose the best fit supplier from a pool of competing alternatives based on the shortlisted evaluation criteria. the mcdm has become interesting among the researcher community over a long time, whereby, it has come across innovative methodologies to help the decision makers to weigh multiple alternatives to choose the best option, while taking into account a set of conflicting qualitative and quantitative criteria. application of any of the mcdm techniques in supplier selection has two basic objectives, i.e. 
a) deriving the preferential weights (relative importance) of the considered criteria by evaluating one against the others, and b) ranking of the candidate suppliers based on the accumulative score with respect to each criterion. in this direction, an unlimited number of mcdm techniques, like analytic hierarchy process (ahp), technique of order preference similarity to the ideal solution (topsis), vikor (vlsekriterijumska optimizacija i kompromisno resenje), grey relational analysis (gra), preference ranking organization method for enrichment evaluation (promethee), combinative distance-based assessment (codas), weighted aggregated sum product assessment (waspas) etc. has been deployed for solving the supplier selection problems in diverse manufacturing industries. recently, stević et al. (2020) proposed a new mcdm tool, called measurement of alternatives and ranking according to compromise solution (marcos) involving ranking of the alternatives based on a compromised solution. in this approach, the ranking procedure is based on the distance of the alternatives from the ideal and anti-ideal solutions with respect to the considered criteria and their aggregated score reflected by a utility function. an integrated d-marcos method for supplier selection in an iron and steel industry 51 however, the biggest challenge in decision making lies in the underlying uncertainty of the decision makers while evaluating the alternatives with respect to a set of qualitative criteria based on some predefined benchmarks and linguistic judgements. a linguistic judgement cannot always be ascertained, especially when there is not a single decision maker, rather an entire team, introducing chances of biasness in the decision making process. in real life situations, it becomes difficult for the decision makers to ascertain a particular degree or rating to a specific criterion owing to their varied backgrounds and experiences. various mathematical tools, like fuzzy set theory, intutionistic fuzzy set etc. have already been employed to deal with the uncertainty and ambiguity involved in the supplier selection process. deng (2012) introduced another tool in the form of d numbers to successfully account for uncertainty involved in the decision making processes. this paper aims at addressing the issue of uncertainty involved in the supplier selection process when the concerned decision makers assign relative scores to the competing suppliers with respect to different evaluation criteria, which if ignored, may result in highly ambiguous results. though marcos method itself is a robust yet mathematically simple model, it still does not address the issue of uncertainty often involved in group decision making where the team of experts comes from different backgrounds and experiences. while there are alternatives, like fuzzy theory, dempster-shafer (d-s) theory etc. to deal with such uncertainty, they often have constraints, like elements in the frame of discernment should be mutually exclusive whereby the sum of the basic probability of mass function should be one. however, d numbers, free from such constraints, provide more generalized solutions. thus, combining d numbers with marcos gives a more holistic and impactful model covering the major loopholes involved in group decision making by accounting for uncertainties using a mathematically simpler formulation. 
this paper thus deals with the implementation of the proposed methodology for supplier selection in a fully operational, large-scale iron and steel industry in india which has to compete with other stalwarts to carve out its own position in the global market. the organization of this paper is as follows. section 2 provides a brief literature review on the applications of different mcdm techniques in supplier selection. section 3 deals with the mathematical details of d numbers and the marcos method. section 4 illustrates the application of the proposed methodology for supplier selection in an indian iron and steel industry. finally, conclusions are drawn in section 5 along with future directions of research.

2. literature review

the importance of supplier selection is evident from the enormous volume of research conducted on the applications of various mcdm techniques under both certain and uncertain manufacturing environments. table 1 provides a concise list of different evaluation criteria and mathematical approaches considered for resolving supplier selection problems, with a special emphasis on steel making industries. it can be clearly noticed from table 1 that different mathematical techniques have mainly been employed for two purposes, i.e. a) determination of the weights (relative importance) to be assigned to various evaluation criteria and b) ranking of the competing suppliers. ahp, the best-worst method etc. have been deployed for criteria weight estimation, while anp, topsis, gra, promethee, vikor, codas etc. have been applied for supplier ranking. it is also noticed that some of those mcdm techniques have been combined with fuzzy sets, intuitionistic fuzzy sets, d-s theory etc. in order to provide more accurate solutions to supplier selection problems dealing with qualitative information in a group decision making environment. for effective supplier selection, the primary task is to shortlist the appropriate set of evaluation criteria. back in the 1960s, dickson (1966) stressed the dependence of supplier selection on various evaluation criteria while compiling an exhaustive set of criteria. however, with rapid technological advancements and the involvement of global economic parameters, a shift in the criteria for supplier selection has been observed. in the 1990s, decision makers emphasized the introduction of more qualitative criteria into the supplier selection process, making it more complicated and prone to variation due to human involvement. stević (2017) performed a comprehensive review of the various criteria and sub-criteria considered for dealing with supplier selection problems. these specific sets of evaluation criteria, however, vary from one manufacturing organization to another. with every organization striving hard to develop the best sustainable scm system, the importance of a well-chosen set of evaluation criteria cannot be ignored. based on the literature review, it is observed that the greatest importance has been given to price, delivery, quality and production capacity. this extensive body of research shows the importance of supplier selection in manufacturing industries. however, relatively little attention has been paid to the uncertainty involved in the decision making process, mainly because of the expensive computational steps it entails. for industries seeking a robust decision, it has now become mandatory that the adopted technique be both exhaustive and efficient, eradicating any chance of mistake.
most of the past research works have weighed the participating decision makers equally, not accounting for their varied levels of expertise and experience. those studies have also been based on the assumption that human preference can be linearly determined. in order to overcome the drawbacks of the previously adopted techniques, in this paper a new approach for supplier selection integrating d numbers and the marcos method is proposed. it is numerically easier to implement, yet provides more reliable ranking results, making it attractive for the manufacturing industries. finally, it is applied to an indian iron and steel making industry while considering the opinions of three experts/decision makers on five alternative suppliers and seven evaluation criteria.

3. methods

3.1. d numbers

d numbers are an extension of the d-s theory that accounts for uncertainty of information. they can be defined as follows (deng, 2012; deng et al., 2014b): let ω be a finite non-empty set; a d number is a mapping

$$ d: 2^{\Omega} \rightarrow [0,1] \qquad (1) $$

with

$$ d(\emptyset) = 0 \quad \text{and} \quad \sum_{B \subseteq \Omega} d(B) \le 1 \qquad (2) $$

where ∅ is the empty set and b is a subset of ω.

table 1. list of criteria and methods considered for supplier selection
author(s) | criteria | method(s)
tahriri et al. (2008) | quality, delivery, direct cost, trust, financial position, management and organization | ahp
gnanasekaran et al. (2010) | quality, delivery, cost, financial position, service | fuzzy ahp
liu (2010) | price, delivery, quality, relationship, financial position | normalization
ying-tuo and yang (2011) | quality of products, environmental friendship, price, development capability | vague sets
vimal et al. (2012) | minimum quantity, maximum quantity, defective item, late delivery, product price, order quantity | topsis
parthiban et al. (2013) | quality, delivery, productivity, service, cost, technological capability, application of conceptual manufacturing, environment management, human resource management, manufacturing challenges | fuzzy logic, strength-weakness-opportunity-threat (swot) analysis, data envelopment analysis
dargi et al. (2014) | quality, price, production capacity, technical capability and facility, service and delivery, reputation, geographical location | fuzzy analytic network process
kar (2015a) | product quality, delivery compliance, price, technological capability, production capability, financial strength, electronic transaction capability | ahp, fuzzy set theory, neural network
kar (2015b) | product quality, delivery compliance, price, technological capability, production capability, financial strength, electronic transaction capability | delphi method, fuzzy ahp
kamath et al. (2016) | quality, cost, delivery, vendor relationship management | ahp
abdulshahed et al. (2017) | quality, direct cost, lead time, logistics service | grey system theory
azimifard et al. (2018) | economic sustainability, environmental sustainability, social sustainability | ahp, topsis
badi et al. (2018) | quality, direct cost, lead time, logistics service | codas
banaeian et al. (2018) | service level, quality, price, environmental management system | fuzzy topsis, fuzzy vikor, fuzzy gra
jain and singh (2018) | quality, delivery, performance history, cost | ahp, waspas
kumar et al. (2018) | cost, delivery capability, quality, performance, reputation | fuzzy topsis
abdullah et al. (2019) | cost of product, quality of product, service provided, on-time delivery, technology level, environmental management system, green packaging | promethee
jain and singh (2019) | economic, environmental, social | fuzzy modified kano model
javad et al. (2020) | collaborations, environmental investment and economic benefit, resource availability, green competency, environmental management initiative, research and design initiatives, green purchasing capability, regulatory obligations, pressures and market demand | best worst method, fuzzy topsis
jain and singh (2020) | economic sustainability, environmental sustainability, social sustainability | fuzzy inference system with fuzzy kano model

d numbers have an advantage over the d-s theory, which requires that all elements of the set ω be mutually exclusive and that $\sum_{B \subseteq \Omega} d(B) = 1$, i.e. that the information be complete. d numbers are also capable of dealing with incomplete information, i.e. the case $\sum_{B \subseteq \Omega} d(B) < 1$. considering a set ω = {b1, b2, …, bi, …, bn}, where bi ∈ ℝ and bi ≠ bj, a d number can be represented as d({b1}) = v1, d({b2}) = v2, …, d({bi}) = vi, …, d({bn}) = vn, or, equivalently, as d = {(b1, v1), (b2, v2), …, (bi, vi), …, (bn, vn)}, where vi > 0 and $\sum_{i=1}^{n} v_i \le 1$. there are certain properties which are important for performing different operations on d numbers.

property 1 (permutation invariability) (deng et al., 2014a; 2014b): for two d numbers d1 = {(b1, v1), …, (bi, vi), …, (bn, vn)} and d2 = {(bn, vn), …, (bi, vi), …, (b1, v1)}, d1 ⇌ d2, i.e. the two representations are equivalent.

property 2 (deng, 2012; deng et al., 2014b): if d = {(b1, v1), (b2, v2), …, (bi, vi), …, (bn, vn)}, the integrated value of d is defined as

$$ i(d) = \sum_{i=1}^{n} b_i v_i \qquad (3) $$

property 3 (deng, 2012; deng et al., 2014a): for two d numbers $d_1 = \{(b^1_1, v^1_1), \ldots, (b^1_i, v^1_i), \ldots, (b^1_n, v^1_n)\}$ and $d_2 = \{(b^2_1, v^2_1), \ldots, (b^2_j, v^2_j), \ldots, (b^2_m, v^2_m)\}$, their combination $d = d_1 \odot d_2$ is defined by

$$ d(b) = v \qquad (4) $$

where

$$ b = \frac{b^1_i + b^2_j}{2} \qquad (5) $$

$$ v = \frac{v^1_i \, v^2_j}{c} \qquad (6) $$

and the normalization constant c is

$$
c =
\begin{cases}
\sum_{i=1}^{n}\sum_{j=1}^{m} v^1_i v^2_j, & \text{if } \sum_{i=1}^{n} v^1_i = 1 \text{ and } \sum_{j=1}^{m} v^2_j = 1;\\[4pt]
\sum_{i=1}^{n}\sum_{j=1}^{m} v^1_i v^2_j + v^2_c \sum_{i=1}^{n} v^1_i, & \text{if } \sum_{i=1}^{n} v^1_i = 1 \text{ and } \sum_{j=1}^{m} v^2_j < 1;\\[4pt]
\sum_{i=1}^{n}\sum_{j=1}^{m} v^1_i v^2_j + v^1_c \sum_{j=1}^{m} v^2_j, & \text{if } \sum_{i=1}^{n} v^1_i < 1 \text{ and } \sum_{j=1}^{m} v^2_j = 1;\\[4pt]
\sum_{i=1}^{n}\sum_{j=1}^{m} v^1_i v^2_j + v^2_c \sum_{i=1}^{n} v^1_i + v^1_c \sum_{j=1}^{m} v^2_j + v^1_c v^2_c, & \text{if } \sum_{i=1}^{n} v^1_i < 1 \text{ and } \sum_{j=1}^{m} v^2_j < 1;
\end{cases}
\qquad (7)
$$

where $v^1_c = 1 - \sum_{i=1}^{n} v^1_i$ and $v^2_c = 1 - \sum_{j=1}^{m} v^2_j$. it is worthwhile to mention here that the combination operation is not associative in nature. hence, a further operation can be formulated to combine multiple d numbers.

property 4 (deng et al., 2014a): if d1, d2, …, dn are n d numbers and μj is an order variable for each dj, indicated by the tuple (μj, dj), then the function fd represents the combination operation of multiple d numbers:

$$ f_d(d_1, d_2, \ldots, d_n) = [\ldots[[d_{\lambda_1} \odot d_{\lambda_2}] \odot d_{\lambda_3}] \odot \ldots] \odot d_{\lambda_n} \qquad (8) $$

where $d_{\lambda_1}$ is the $d_j$ of the tuple $(\mu_j, d_j)$ in which the value of $\mu_j$ is the least.

3.2. marcos method

marcos is a recently developed mcdm technique used for ranking the candidate alternatives (stević et al., 2020). the consideration of the reference ideal and anti-ideal solutions at the initial stages of the analysis makes it advantageous over the other ranking techniques.
in this method, each alternative receives a particular value of the utility function depending on its relation to the ideal and anti-ideal solutions. preference is given to those alternatives which are closest to the ideal solution and farthest from the anti-ideal solution. the computation starts with the formation of a decision matrix showing the performance of the alternatives with respect to the different criteria. in this matrix, the ideal solution (having the maximum values for benefit criteria and the minimum values for cost criteria) and the anti-ideal solution (with the maximum values for cost criteria and the minimum values for benefit criteria) are defined. the initial matrix is normalized with respect to the reference values and the corresponding weighted normalized matrix is derived by multiplying all the elements of the normalized matrix by the weight coefficients of the considered criteria. this matrix is finally employed to evaluate the utility degree of each alternative, based on which the alternatives are subsequently ranked.

3.3. d-marcos method

it has already been mentioned that this paper deals with the integration of d numbers with the marcos method for selection of the most apposite supplier in an indian iron and steel making industry while taking into account the uncertainty prevalent in human judgement, so as to make the decision more robust. for its successful implementation, a set of n criteria is recognized along with the determination of their weights (relative importance) using a suitable criteria weight measurement technique. a versatile team of r experts is then formed, where each expert is assigned a weight $\lambda_k > 0$ (k = 1, 2, …, r) such that $\sum_{k=1}^{r} \lambda_k = 1$, based on his/her level of experience and expertise. the procedural steps of the d-marcos method are presented below.

step 1: in this step, the evaluation matrices of all the participating experts are formulated. owing to their different backgrounds and the variation in human judgements, there exists a certain extent of uncertainty while evaluating the alternatives with respect to each of the criteria, which can be taken care of by the implementation of d numbers. for the kth expert, the performance score assigned to the ith alternative against the jth criterion is represented by the d number $d^k_{ij}$. hence, the decision matrix with m alternatives and n criteria for the kth expert is represented as:

$$
t'_k =
\begin{matrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{matrix}
\begin{bmatrix}
d^k_{11} & d^k_{12} & \cdots & d^k_{1n} \\
d^k_{21} & d^k_{22} & \cdots & d^k_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
d^k_{m1} & d^k_{m2} & \cdots & d^k_{mn}
\end{bmatrix}
\qquad (9)
$$

step 2: the aggregated decision matrix for all the experts in the team is now computed based on the properties of d numbers, keeping in mind the weight assigned to each expert. if $t'_1 = [d^1_{ij}]_{m \times n}$ and $t'_2 = [d^2_{ij}]_{m \times n}$ are the matrices evaluated by experts e1 and e2, then the aggregated decision matrix is:

$$
t =
\begin{matrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{matrix}
\begin{bmatrix}
d_{11} & d_{12} & \cdots & d_{1n} \\
d_{21} & d_{22} & \cdots & d_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
d_{m1} & d_{m2} & \cdots & d_{mn}
\end{bmatrix}
\qquad (10)
$$

such that $d_{ij} = d^1_{ij} \odot d^2_{ij}$, where $1 \le i \le m$ and $1 \le j \le n$. for more than two experts in the decision making team, the aggregated decision matrix is developed using eq. (8).
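the d-number operations of section 3.1 and the per-cell aggregation of step 2 can be summarized in a short python sketch. this is a minimal illustration under stated assumptions, not the authors' implementation: a d number is stored as a list of (b, v) pairs, integrate() follows property 2, combine() follows property 3 with the normalization constant of eq. (7) as reconstructed above, and the expert weights are assumed to fix only the combination order (smallest weights first, as described for the case study in section 4). the judgements in the demonstration are hypothetical.

```python
from functools import reduce

def integrate(d):
    """Property 2, eq. (3): crisp (integrated) value of a D number."""
    return sum(b * v for b, v in d)

def combine(d1, d2):
    """Property 3, eqs. (4)-(7): pairwise combination of two D numbers."""
    q1 = sum(v for _, v in d1)            # total assigned belief of d1
    q2 = sum(v for _, v in d2)            # total assigned belief of d2
    v1c, v2c = 1.0 - q1, 1.0 - q2         # unassigned (incomplete) parts
    c = sum(v1 * v2 for _, v1 in d1 for _, v2 in d2)
    if q2 < 1.0:
        c += v2c * q1                     # extra term when d2 is incomplete
    if q1 < 1.0:
        c += v1c * q2                     # extra term when d1 is incomplete
    if q1 < 1.0 and q2 < 1.0:
        c += v1c * v2c
    out = {}
    for b1, v1 in d1:
        for b2, v2 in d2:
            b = (b1 + b2) / 2.0           # eq. (5)
            out[b] = out.get(b, 0.0) + v1 * v2 / c   # eq. (6)
    return sorted(out.items())

def aggregate_cell(judgements, expert_weights):
    """Step 2 / property 4: fold the combination over one cell d^1_ij ... d^r_ij,
    starting from the experts with the smallest weights."""
    order = sorted(range(len(judgements)), key=lambda k: expert_weights[k])
    return reduce(combine, (judgements[k] for k in order))

# Hypothetical judgements of three experts for one alternative-criterion cell.
cell = [[(5, 0.6), (6, 0.4)],   # expert 1, complete information
        [(7, 0.9)],             # expert 2, 10% of the belief unassigned
        [(6, 1.0)]]             # expert 3
print(aggregate_cell(cell, expert_weights=[0.4, 0.35, 0.25]))
print(integrate(aggregate_cell(cell, expert_weights=[0.4, 0.35, 0.25])))
```

reduce() applies combine() pairwise, mirroring eq. (8); since the operation is not associative, the ordering of the experts matters.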
step 3: in order to rank the candidate alternatives by applying the marcos method, a consolidated m×n matrix is formulated by integrating each of the d numbers assigned to a particular alternative against each criterion:

$$
x =
\begin{matrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{matrix}
\begin{bmatrix}
x_{11} & x_{12} & \cdots & x_{1n} \\
x_{21} & x_{22} & \cdots & x_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
x_{m1} & x_{m2} & \cdots & x_{mn}
\end{bmatrix}
\qquad (11)
$$

where $x_{ij} = i(d_{ij})$.

step 4: all the considered evaluation criteria are now grouped into two categories, i.e. benefit (larger-the-better) criteria (represented by b) and cost (smaller-the-better) criteria (denoted by c).

step 5: the consolidated matrix is extended by defining two additional rows, indicating the ideal (ai) and anti-ideal (aai) solutions. the anti-ideal solution reflects the worst alternative, whereas the ideal solution reflects the best possible alternative.

$$
x' =
\begin{matrix} aai \\ a_1 \\ a_2 \\ \vdots \\ a_m \\ ai \end{matrix}
\begin{bmatrix}
x_{aa1} & x_{aa2} & \cdots & x_{aan} \\
x_{11} & x_{12} & \cdots & x_{1n} \\
x_{21} & x_{22} & \cdots & x_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
x_{m1} & x_{m2} & \cdots & x_{mn} \\
x_{a1} & x_{a2} & \cdots & x_{an}
\end{bmatrix}
\qquad (12)
$$

where

$$ x_{aaj} = \min_i x_{ij} \ \text{if } j \in b \quad \text{and} \quad x_{aaj} = \max_i x_{ij} \ \text{if } j \in c \qquad (13) $$

$$ x_{aj} = \max_i x_{ij} \ \text{if } j \in b \quad \text{and} \quad x_{aj} = \min_i x_{ij} \ \text{if } j \in c \qquad (14) $$

step 6: the matrix x' is then normalized to form another matrix $n = [n_{ij}]_{(m+2) \times n}$ based on the following equations:

$$ n_{ij} = \frac{x_{ij}}{x_{aj}} \quad \text{if } j \in b \ \text{(benefit criterion)} \qquad (15) $$

$$ n_{ij} = \frac{x_{aj}}{x_{ij}} \quad \text{if } j \in c \ \text{(cost criterion)} \qquad (16) $$

step 7: the final weighted matrix $y = [y_{ij}]_{(m+2) \times n}$ is obtained by multiplying the elements of the normalized matrix by the corresponding criteria weights:

$$ y_{ij} = n_{ij} w_j \qquad (17) $$

where $n_{ij}$ is an element of matrix n and $w_j$ is the weight assigned to the jth criterion.

step 8: the positive and negative degrees of utility of each alternative with respect to the ideal and anti-ideal solutions are respectively determined using the following equations:

$$ k_i^{+} = \frac{t_i}{t_{ai}} \qquad (18) $$

$$ k_i^{-} = \frac{t_i}{t_{aai}} \qquad (19) $$

where

$$ t_i = \sum_{j=1}^{n} y_{ij} \quad (i = 1, 2, \ldots, m) \qquad (20) $$

step 9: the utility function used to evaluate the compromise of each alternative with respect to the ideal and anti-ideal solutions is defined as:

$$ f(k_i) = \frac{k_i^{+} + k_i^{-}}{1 + \dfrac{1 - f(k_i^{+})}{f(k_i^{+})} + \dfrac{1 - f(k_i^{-})}{f(k_i^{-})}} \qquad (21) $$

where the utility functions with respect to the ideal and anti-ideal solutions are respectively defined as:

$$ f(k_i^{+}) = \frac{k_i^{-}}{k_i^{+} + k_i^{-}} \qquad (22) $$

$$ f(k_i^{-}) = \frac{k_i^{+}}{k_i^{+} + k_i^{-}} \qquad (23) $$

step 10: the final ranking order of the alternatives is obtained by assigning the best rank to the alternative having the highest utility function value.

4. application of d-marcos method for supplier selection

as mentioned earlier, this paper deals with the application of the d-marcos method for selecting the most apposite supplier for an iron and steel making industry. the steel plant considered here is located in an industrial town of west bengal, india and procures the requisite materials from various organizations across the globe. it is a leading producer of steel with an annual production of around 2.4 million tonnes of crude steel. it came into existence in 1959 and has been growing ever since. although some of its primary raw materials are arranged from its own captive mines or from the parent organization, there are a lot of other materials that need to be acquired from other suppliers. it is a gigantic unit housing a large number of equipment and machinery, requiring a huge indenting volume. apart from the semi-finished products, its product basket consists of structural, merchant and railway items.
in this plant, there is a large number of furnaces and reheating units continuously in action, involving huge refractory consumption. these refractory materials are mostly procured from external suppliers. this unit needs to be managed to stand the test of time while satisfying its clients across the globe. the importance of the sc in such a big unit thus cannot be ignored. there is a dedicated team continuously working to evaluate its wide range of suppliers and choose the most eligible ones. based on the large set of criteria available in the literature for the iron and steel industry (kar, 2015b), the seven most important criteria are shortlisted for evaluation of the competing suppliers engaged in the supply of refractory materials to the considered plant. table 2 provides the list of those criteria, which are weighed by the participating experts using the best-worst method (rezaei, 2015). it is worthwhile to mention here that, amongst the criteria, delivery compliance (c2) and price (c3) are cost criteria, always preferred with their minimum values. it is also noticed from table 2 that product quality (c1) and delivery compliance (c2) are the two most important criteria for this supplier selection, whereas electronic transaction capability (c7) is the least important criterion. table 3 presents details of the five major suppliers, among whom the most competent one needs to be identified using the d-marcos method. these five suppliers are appraised by a team consisting of three decision makers from the steel melting unit, the materials management department and the finance department, each having more than 15 years of industrial experience. based on their varying expertise and knowledge, they are assigned weights of 0.4, 0.35 and 0.25 respectively. they are asked to assess the relative performance of the considered suppliers with respect to each criterion using a 1-9 scale, where 1-2 represent the lowest scores, 8-9 mark the highest scores, 4-6 denote medium scores, and 3 and 7 are intermediate scores.

table 2. list of the criteria for supplier selection
criterion | description | weight
product quality (c1) | it takes into account the worth of a product in compliance with a particular threshold value for minimum assured life and guaranteed performance. | 0.312
delivery compliance (c2) | it accounts for the time within which delivery is met. scheduled delivery of materials is much needed to ensure a proper inventory level such that production never gets disrupted due to unavailability of resources. | 0.223
price (c3) | it is the monetary value of an item to be paid by the organization to the concerned supplier. | 0.208
technological capability (c4) | with the advancement of cutting-edge technology, products and services must be proficient enough to meet various requirements of the organization even beyond maintaining the delivery schedule. it deals with the compatibility of a supplier in keeping up with advanced technology. | 0.125
production capability (c5) | it primarily deals with the ability of a supplier to deliver the required quantity of material at the specified time, keeping in mind fluctuating requirements. it is often graded with respect to standard certifications. | 0.114
financial strength (c6) | it stresses the overall financial stability of a supplier with respect to the changing market scenario. it is ranked based on a particular supplier's annual turnover. | 0.009
electronic transaction capability (c7) | with technological advancements, electronic transaction capability is a much needed sophistication for a supplier to ensure online payment with a reduction of other additional costs. | 0.006

table 3. list of the shortlisted suppliers
supplier | description
s1 | an almost new organization with presence in different countries, well preferred by the steel industries due to its capability to deliver functional refractory at a reasonably low price.
s2 | it was established in the early 70s as an msme and proceeded towards adopting better technology of late, but has already succeeded in carving its name amongst the top suppliers of refractory materials.
s3 | it started its journey in the early 70s and has become a well-known supplier of regular refractory materials. with the introduction of state-of-the-art technology, it has also collaborated with other international manufacturers to sustain itself through the competitive race.
s4 | it was established in the 80s with modern technology and management. it has always been adaptive to the latest technologies, grabbing the steel industry's attention.
s5 | established in the late 90s, it grossly depends on outsourcing of materials with high variation in product quality and hence is supposed to be a risky supplier.

tables 4-6 respectively show the corresponding evaluation matrices developed by the participating decision makers (dm1, dm2 and dm3) while assessing the performance of each of the five suppliers with respect to each criterion in terms of d numbers. for example, in table 4, using the 1-9 scale, dm1 assigns scores 7 and 8 with 50% assurance in each case while appraising supplier s1 with respect to criterion c1. similarly, in table 5, dm2 is 80% confident to assign a score of 6 to supplier s1 with respect to criterion c1. dm2 is in a dilemma (20% chance) while appraising supplier s1 with respect to criterion c1, i.e. in 20% of cases, dm2 is not assured to provide any score to supplier s1 against c1. in table 6, dm3 is 100% assured to assign a score of 6 to supplier s1 against criterion c1. now, based on the individual evaluation matrices of the three decision makers and using properties (2)-(4) of d numbers, the aggregated d number scores are computed in table 7. it is observed that the scores assigned to supplier s1 with respect to criterion c1 by dm1, dm2 and dm3 are respectively d1 = {(7, 0.5), (8, 0.5)}, d2 = {(6, 0.8)} and d3 = {(6, 1)}. therefore, the aggregated score for supplier s1 against criterion c1 is derived as d = d1 ⊙ (d2 ⊙ d3) = {(6.5, 0.35), (7, 0.35)}.

table 4. evaluation matrix by dm1
supplier | c1 | c2 | c3 | c4 | c5 | c6 | c7
s1 | {(7,0.5),(8,0.5)} | {(2,0.5),(3,0.5)} | {(1,1)} | {(7,1)} | {(8,0.2),(7,0.8)} | {(8,1)} | {(7,1)}
s2 | {(8,1)} | {(3,1)} | {(5,1)} | {(7,0.2),(8,0.8)} | {(7,1)} | {(8,1)} | {(7,0.8),(8,0.2)}
s3 | {(7,0.8),(8,0.2)} | {(1,0.8),(2,0.2)} | {(3,0.5),(4,0.5)} | {(9,1)} | {(9,1)} | {(7,0.6),(8,0.4)} | {(7,1)}
s4 | {(6,0.5),(7,0.5)} | {(3,0.8)} | {(4,1)} | {(8,1)} | {(8,1)} | {(9,1)} | {(8,1)}
s5 | {(7,0.5)} | {(2,1)} | {(3,0.2),(4,0.8)} | {(8,0.4),(7,0.6)} | {(7,0.8),(8,0.2)} | {(7,0.8),(8,0.2)} | {(9,1)}
table 5. evaluation matrix by dm2
supplier | c1 | c2 | c3 | c4 | c5 | c6 | c7
s1 | {(6,0.8)} | {(2,1)} | {(1,1)} | {(7,0.6),(6,0.4)} | {(7,1)} | {(7,1)} | {(6,1)}
s2 | {(7,1)} | {(2,0.7),(3,0.3)} | {(4,1)} | {(9,1)} | {(8,1)} | {(7,0.6),(8,0.4)} | {(8,1)}
s3 | {(7,1)} | {(1,1)} | {(3,0.7),(2,0.3)} | {(8,1)} | {(9,0.8),(8,0.2)} | {(8,0.7),(7,0.3)} | {(7,1)}
s4 | {(6,0.3),(7,0.7)} | {(2,1)} | {(4,0.7),(5,0.3)} | {(8,0.5),(7,0.5)} | {(8,0.8),(9,0.2)} | {(9,1)} | {(8,0.5),(7,0.5)}
s5 | {(6,0.8)} | {(2,0.9)} | {(3,0.6),(4,0.4)} | {(7,0.2),(6,0.8)} | {(7,1)} | {(7,1)} | {(9,1)}

table 6. evaluation matrix by dm3
supplier | c1 | c2 | c3 | c4 | c5 | c6 | c7
s1 | {(6,1)} | {(2,1)} | {(1,1)} | {(7,0.6),(8,0.4)} | {(8,1)} | {(7,1)} | {(7,1)}
s2 | {(8,1)} | {(2,0.5),(3,0.5)} | {(5,1)} | {(9,0.6),(8,0.4)} | {(8,1)} | {(7,0.5),(8,0.5)} | {(9,1)}
s3 | {(7,0.7)} | {(1,1)} | {(3,0.4),(4,0.6)} | {(9,0.3),(8,0.7)} | {(9,1)} | {(8,1)} | {(9,1)}
s4 | {(7,0.6),(8,0.4)} | {(3,1)} | {(4,0.6)} | {(8,1)} | {(8,0.8),(9,0.2)} | {(8,1)} | {(8,1)}
s5 | {(6,0.7),(7,0.3)} | {(1,0.3),(2,0.7)} | {(3,0.8),(4,0.2)} | {(7,1)} | {(7,1)} | {(7,0.8),(8,0.2)} | {(9,1)}

table 7. aggregated decision matrix for the supplier selection problem
supplier | c1 | c2 | c3 | c4 | c5 | c6 | c7
s1 | {(6.5,0.35),(7,0.35)} | {(2,0.5),(2.5,0.5)} | {(1,1)} | {(7,0.375),(6.75,0.3125),(7.25,0.3125)} | {(7.75,0.4),(7.25,0.6)} | {(7.5,1)} | {(6.75,1)}
s2 | {(7.75,1)} | {(2.5,0.325),(2.75,0.375),(3,0.3)} | {(4.75,1)} | {(8,0.18325),(8.5,0.33325),(8.25,0.3165),(7.75,0.1665)} | {(7.5,1)} | {(8.25,0.375),(7.5,0.31875),(8,0.30625)} | {(7.75,0.6),(8.25,0.4)}
s3 | {(7,0.3425),(7.5,0.1925)} | {(1,0.6),(1.5,0.4)} | {(3,0.2),(3.5,0.2),(3.25,0.3),(3.75,0.165),(2.75,0.135)} | {(8.5,0.522),(8.75,0.477)} | {(9,0.5333),(8.75,0.4666)} | {(7.5,0.29),(8,0.24),(7.25,0.26),(7.75,0.21)} | {(7.5,1)}
s4 | {(6.25,0.145),(6.75,0.3),(7.25,0.155),(7,0.2),(6.5,0.2)} | {(2.75,0.6)} | {(4,0.33125),(4.25,0.30625)} | {(8,0.5),(7.75,0.5)} | {(8.25,0.375),(8.5,0.275),(8,0.35)} | {(8.75,1)} | {(8,0.5),(7.75,0.5)}
s5 | {(6.5,0.175),(6.75,0.155)} | {(1.75,0.325),(2,0.35)} | {(3,0.275),(4,0.19),(3.5,0.3),(3.75,0.26),(3.25,0.14)} | {(7.5,0.2),(7,0.25),(6.75,0.3),(7.25,0.25)} | {(7,0.6),(7.5,0.4)} | {(7.5,0.2),(7,0.35),(7.75,0.15),(7.25,0.3)} | {(9,1)}

it is worthwhile to mention here that in this supplier selection problem the participating decision makers have been assigned weights of 0.4, 0.35 and 0.25 respectively, depending on their varying experience and expertise.
thus, the combination operation for d numbers is first performed between dm2 and dm3, which have the minimum weights, and then the corresponding d number for dm1 is taken into consideration for the combination operation. now, based on the aggregated decision matrix developed in terms of d numbers, the corresponding consolidated matrix x is formulated using eq. (3). for instance, x11 = (6.5×0.35) + (7×0.35) ≈ 4.72. in a similar manner, dm1, dm2 and dm3 respectively evaluate the performance of supplier s2 against criterion c4 as d1 = {(7,0.2),(8,0.8)}, d2 = {(9,1)} and d3 = {(9,0.6),(8,0.4)} in terms of d numbers. the aggregated score for supplier s2 with respect to criterion c4 is calculated as d = d1 ⊙ (d2 ⊙ d3) = {(8.0, 0.18325), (8.5, 0.33325), (8.25, 0.3165), (7.75, 0.1665)}. thus, the value of element x24 in the consolidated matrix becomes x24 = (8×0.18325) + (8.5×0.33325) + (8.25×0.3165) + (7.75×0.1665) = 8.2. the resulting consolidated matrix is:

x
supplier | c1 | c2 | c3 | c4 | c5 | c6 | c7
s1 | 4.72 | 2.25 | 1 | 7 | 7.45 | 7.5 | 6.75
s2 | 7.75 | 2.74 | 4.75 | 8.2 | 7.5 | 7.93 | 7.95
s3 | 3.84 | 1.20 | 3.26 | 8.61 | 8.88 | 7.60 | 7.5
s4 | 6.75 | 1.65 | 2.63 | 7.87 | 8.23 | 8.76 | 7.87
s5 | 2.18 | 1.27 | 4.06 | 7.09 | 7.2 | 7.29 | 9

based on the procedural steps of the d-marcos method, the extended matrix x' is formulated from the consolidated matrix by defining two additional rows, indicating the anti-ideal (aai) and ideal (ai) solutions at the top and the bottom of the consolidated matrix respectively:

x'
| c1 | c2 | c3 | c4 | c5 | c6 | c7
aai | 2.18 | 2.74 | 4.75 | 7 | 7.2 | 7.29 | 6.75
s1 | 4.72 | 2.25 | 1 | 7 | 7.45 | 7.5 | 6.75
s2 | 7.75 | 2.74 | 4.75 | 8.2 | 7.5 | 7.93 | 7.95
s3 | 3.84 | 1.20 | 3.26 | 8.61 | 8.88 | 7.60 | 7.5
s4 | 6.75 | 1.65 | 2.63 | 7.87 | 8.23 | 8.76 | 7.87
s5 | 2.18 | 1.27 | 4.06 | 7.09 | 7.2 | 7.29 | 9
ai | 7.75 | 1.20 | 1 | 8.61 | 8.88 | 8.76 | 9

now, based on the type of the considered criterion and employing eqs. (15)-(16), the related normalized decision matrix is obtained:

n
| c1 | c2 | c3 | c4 | c5 | c6 | c7
aai | 0.28 | 0.44 | 0.21 | 0.81 | 0.81 | 0.83 | 0.75
s1 | 0.61 | 0.53 | 1 | 0.81 | 0.84 | 0.86 | 0.75
s2 | 1 | 0.44 | 0.21 | 0.95 | 0.84 | 0.90 | 0.88
s3 | 0.49 | 1 | 0.31 | 1 | 1 | 0.87 | 0.83
s4 | 0.87 | 0.73 | 0.38 | 0.91 | 0.93 | 1 | 0.87
s5 | 0.28 | 0.94 | 0.41 | 0.82 | 0.81 | 0.83 | 1
ai | 1 | 1 | 1 | 1 | 1 | 1 | 1

the weighted normalized decision matrix is then computed by multiplying each element of the normalized matrix by the corresponding criteria weight:

y
| c1 | c2 | c3 | c4 | c5 | c6 | c7
aai | 0.0874 | 0.0981 | 0.0436 | 0.1013 | 0.0923 | 0.0074 | 0.0045
s1 | 0.1903 | 0.1182 | 0.2080 | 0.1013 | 0.0958 | 0.0078 | 0.0045
s2 | 0.3120 | 0.0981 | 0.0436 | 0.1188 | 0.0958 | 0.0081 | 0.0053
s3 | 0.1529 | 0.2230 | 0.0645 | 0.1250 | 0.1140 | 0.0078 | 0.0050
s4 | 0.2714 | 0.1628 | 0.0790 | 0.1138 | 0.1060 | 0.0090 | 0.0052
s5 | 0.0874 | 0.2096 | 0.0853 | 0.1025 | 0.0923 | 0.0074 | 0.0060
ai | 0.3120 | 0.2230 | 0.2080 | 0.1250 | 0.1140 | 0.0090 | 0.0060

using eqs. (18)-(23), the positive and negative degrees of utility and the value of the utility function for all the competing suppliers are estimated, as shown in table 8.
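the ranking stage (steps 5-10, eqs. (12)-(23)) can likewise be expressed compactly. the python sketch below is a generic illustration rather than the authors' code: marcos_rank() is a hypothetical helper that takes an already consolidated score matrix, and the short demonstration at the end only re-derives the utility figures for supplier s1 from the aggregate sums t1, t_ai and t_aai reported in the worked example that follows; the full table 8 is not recomputed here.

```python
import numpy as np

def marcos_rank(X, weights, is_benefit):
    """Steps 5-10 of the (D-)MARCOS procedure for a consolidated m x n matrix X."""
    X = np.asarray(X, float)
    w = np.asarray(weights, float)
    b = np.asarray(is_benefit, bool)

    ai = np.where(b, X.max(axis=0), X.min(axis=0))      # ideal row, eq. (14)
    aai = np.where(b, X.min(axis=0), X.max(axis=0))     # anti-ideal row, eq. (13)
    ext = np.vstack([aai, X, ai])                        # extended matrix, eq. (12)

    n = np.where(b, ext / ai, ai / ext)                  # eqs. (15)-(16)
    y = n * w                                            # eq. (17)
    t = y.sum(axis=1)                                    # eq. (20)
    t_aai, t_alt, t_ai = t[0], t[1:-1], t[-1]

    k_plus, k_minus = t_alt / t_ai, t_alt / t_aai        # eqs. (18)-(19)
    f_plus = k_minus / (k_plus + k_minus)                # eq. (22)
    f_minus = k_plus / (k_plus + k_minus)                # eq. (23)
    f = (k_plus + k_minus) / (1 + (1 - f_plus) / f_plus
                                + (1 - f_minus) / f_minus)   # eq. (21)
    return f, (-f).argsort().argsort() + 1               # utility values, ranks (1 = best)

# Re-deriving the utility figures for s1 from the sums reported in the text.
t1, t_ai, t_aai = 0.7259, 0.9970, 0.4346
k_plus, k_minus = t1 / t_ai, t1 / t_aai                  # approx. 0.7281 and 1.6702
f_plus = k_minus / (k_plus + k_minus)                    # approx. 0.6964
f_minus = k_plus / (k_plus + k_minus)                    # approx. 0.3036
f1 = (k_plus + k_minus) / (1 + (1 - f_plus) / f_plus + (1 - f_minus) / f_minus)
print(round(f1, 3))                                      # 0.643, as reported for s1
```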
the detailed computational steps for determining the utility function value for supplier s1 are explained below.

for the ideal solution: t_ai = 0.3120 + 0.2230 + 0.2080 + 0.1250 + 0.1140 + 0.0090 + 0.0060 = 0.9970
for the anti-ideal solution: t_aai = 0.0874 + 0.0981 + 0.0436 + 0.1013 + 0.0923 + 0.0074 + 0.0045 = 0.4346
for supplier s1: t1 = 0.1903 + 0.1182 + 0.2080 + 0.1013 + 0.0958 + 0.0078 + 0.0045 = 0.7259

$$ k_1^{+} = \frac{t_1}{t_{ai}} = \frac{0.7259}{0.9970} = 0.7281; \qquad k_1^{-} = \frac{t_1}{t_{aai}} = \frac{0.7259}{0.4346} = 1.6702 $$

$$ f(k_1^{+}) = \frac{1.6702}{1.6702 + 0.7281} = 0.696410; \qquad f(k_1^{-}) = \frac{0.7281}{1.6702 + 0.7281} = 0.303590 $$

$$ f(k_1) = \frac{0.7281 + 1.6702}{1 + \dfrac{1 - 0.303590}{0.303590} + \dfrac{1 - 0.696410}{0.696410}} = 0.643001 $$

in order to identify the most apposite supplier for providing refractory materials to the considered iron and steel making industry, the suppliers are now ranked based on the computed values of the utility function. it is observed that supplier s4, with the maximum utility value of 0.661829, is ranked first, closely followed by supplier s1. the performance of suppliers s2 and s3 is almost similar. on the other hand, supplier s5 would be considered with the least preference.

table 8. estimation of utility functions for the candidate suppliers
supplier | t_i | k_i^- | k_i^+ | f(k_i^-) | f(k_i^+) | f(k_i) | rank
s1 | 0.7259 | 1.6702 | 0.7281 | 0.303590 | 0.696410 | 0.643001 | 2
s2 | 0.6817 | 1.5686 | 0.6838 | 0.303587 | 0.696412 | 0.603879 | 4
s3 | 0.6922 | 1.5927 | 0.6943 | 0.303585 | 0.696414 | 0.613154 | 3
s4 | 0.7472 | 1.7193 | 0.7494 | 0.303560 | 0.696439 | 0.661829 | 1
s5 | 0.5905 | 1.3587 | 0.5923 | 0.303588 | 0.696412 | 0.523066 | 5

5. conclusions

this paper proposes the integration of d numbers with the marcos method for effective selection of suppliers of refractory materials in an iron and steel industry in india. for this purpose, the relative performance of five competing suppliers is evaluated with respect to seven conflicting criteria using d numbers, based on the opinions of three decision makers with varying knowledge and expertise. the marcos method is then employed for ranking the considered suppliers. it has already been acknowledged that accounting for the uncertainty involved in the supplier selection process is an important task for developing an effective scm system in today's manufacturing environment. although there are several approaches, like fuzzy set theory, d-s theory etc., to deal with uncertainty in decision making processes, the concept of d numbers supersedes the others owing to its ability to provide more robust and flexible results while taking into consideration the varied opinions of individual decision makers, who can evaluate the relative performance of the participating suppliers with varying degrees of uncertainty. thus, this integrated mcdm tool can be efficiently adopted in other domains of decision making, like the selection of an optimal maintenance strategy, plant layout, inventory control policy, machine tool etc., in uncertain manufacturing environments.

author contributions: each author has participated and contributed sufficiently to take public responsibility for appropriate portions of the content.
funding: this research received no external funding.
conflicts of interest: the authors declare no conflicts of interest.

references

abdullah, l., chan, w., & afshari, a. (2019). application of promethee method for green supplier selection: a comparative result based on preference functions. journal of industrial engineering international, 15, 271-285.
abdulshahed, a., badi, i., & blaow, m. (2017).
a grey-based decision-making approach to the supplier selection problem in a steelmaking company: a case study in libya. grey systems: theory and application, 7(3), 385-396. azimifard, a., moosavirad, s. h., & ariafar, s. (2018). selecting sustainable supplier countries for iran’s steel industry at three levels by using ahp and topsis methods. resources policy, 57, 30-44. badi, i., abdulshahed, a. m., & shetwan, a. (2018). a case study of supplier selection for a steelmaking company in libya by using the combinative distance-based assessment (codas) model. decision making: applications in management and engineering, 1(1), 1-12. banaeian, n., mobli, h., fahimnia, b., nielsen, i. e., & omid, m. (2018). green supplier selection using fuzzy group decision making methods: a case study from the agrifood industry. computers & operations research, 89, 337-347. dargi, a., anjomshoae, a., galankashi, m.r., memari, a., & tap, m.b. (2014). supplier selection: a fuzzy-anp approach. procedia computer science, 31, 691-700. chattopadhyay et al./decis. mak. appl. manag. eng. 3 (2) (2020) 49-69 68 de boer, l., labro, e., & morlacchi, p. (2001). a review of methods supporting supplier selection. european journal of purchasing & supply management, 7(2), 75-89. deng, x., hu, y., deng, y., & mahadevan, s. (2014a). environmental impact assessment based on d numbers. expert systems with applications, 41(2), 635-643. deng, x., hu, y., deng, y., & mahadevan, s. (2014b). supplier selection using ahp methodology extended by d numbers. expert systems with applications, 41(1), 156167. deng, y. (2012). d numbers: theory and applications. journal of information & computational science, 9(9), 2421-2428. dickson, g. w. (1966). an analysis of vendor selection and the buying process. journal of purchasing, 2(1), 5-17. gnanasekaran, s., velappan, s., & manimaran, p. (2010). an integrated model for supplier selection using fuzzy analytical hierarchy process: a steel plant case study. international journal of procurement management, 3(3), 292-315. jain, n., & singh, a. (2018). supplier selection in indian iron and steel industry: an integrated mcdm approach. international journal of pure and applied mathematics, 118(20), 455-459. jain, n., & singh, a. (2020). sustainable supplier selection under must-be criteria through fuzzy inference system. journal of cleaner production, 248, 119275. jain, n., & singh, a. r. (2019). sustainable supplier selection criteria classification for indian iron and steel industry: a fuzzy modified kano model approach. international journal of sustainable engineering, 13(1), 17-32. javad, m. o., darvishi, m., & javad, a. o. (2020). green supplier selection for the steel industry using bwm and fuzzy topsis: a case study of khouzestan steel company. sustainable futures, 2, 100012. kamath, g., naik, r., & shiv prasad h. c. (2016). a vendor’s evaluation using ahp for an indian steel pipe manufacturing company. international journal of the analytic hierarchy process, 8(3), 442-461. kar, a.k. (2015a). a hybrid group decision support system for supplier selection using analytic hierarchy process, fuzzy set theory and neural network. journal of computational science, 6, 23-33. kar, a.k. (2015b). reinvestigating vendor selection criteria in the iron and steel industry. international journal of procurement management, 8(5), 570-586. kumar, s., kumar, s., & barman, a. g. (2018). supplier selection using fuzzy topsis multi criteria model for a small scale steel manufacturing unit. 
procedia computer science, 133, 905-912. liu, y. n. (2010). a case study of evaluating supplier’s selection criteria in a steel bars manufacturer. in: proceedings of ieee international conference on industrial engineering and engineering management, 994-998. parkhi, s. (2015). a study of evolution and future of supply chain management. aims international journal of management, 9(1), 95-106. an integrated d-marcos method for supplier selection in an iron and steel industry 69 parthiban, p., zubar, h. a., & katakar, p. (2013). vendor selection problem: a multicriteria approach based on strategic decisions. international journal of production research, 51(5), 1535-1548. rezaei, j. (2015). best-worst multi-criteria decision-making method. omega, 53, 4957. stević, ž. (2017). criteria for supplier selection: a literature review. international journal of engineering, business and enterprise applications, 1(1), 23-27. stević, ž., pamučar, d., puška, a., & chatterjee, p. (2020). sustainable supplier selection in healthcare industries using a new mcdm method: measurement of alternatives and ranking according to compromise solution (marcos). computers & industrial engineering, 140, 106231. tahriri, f., osman, m., ali, a., yusuff, r., & esfandyari, a. (2008). ahp approach for supplier evaluation and selection in a steel manufacturing company. journal of industrial engineering and management, 1(2), 54-76. vimal, j., chaturvedi, v., & dubey, a. (2012). application of topsis method for supplier selection in manufacturing industry. international journal of research in engineering and applied sciences, 2(5), 25-35. ying-tuo, p., & yang, c. (2011). iron and steel companies green suppliers’ selection model based on vague sets group decision-making method. in: proceedings of the international conference on electronics, communications and control, 2702-2705. © 2018 by the authors. submitted for possible open access publication under the terms and conditions of the creative commons attribution (cc by) license (http://creativecommons.org/licenses/by/4.0/). plane thermoelastic waves in infinite half-space caused decision making: applications in management and engineering vol. 4, issue 1, 2021, pp. 85-103. issn: 2560-6018 eissn: 2620-0104 doi: https://doi.org/10.31181/dmame2104085g * corresponding author. e-mail addresses: gozde.koca@bilecik.edu.tr (g. koca), sdayldrmm@gmail.com (s. yıldırım) bibliometric analysis of dematel method gözde koca 1* and seda yıldırım 1 1 bilecik seyh edebali university, department of business administration, bilecik, turkey received: 11 january 2021; accepted: 3 february 2021; available online: 11 february 2021. original scientific paper abstract: in this study, a bibliometric analysis of the studies evaluated with dematel (decision making experiment and evaluation laboratory method), one of the mcdm methods in web of science, was performed according to various performance indicators. the total number of dematel publications examined is 1963 documents. when dematel studies are evaluated in terms of countries, it is seen that china is the leader (553 documents; 28.17%). the most cooperative country is china. the country with the highest h-index is taiwan (62). journal of cleaner production is the most efficient journal (96; 4.88%). national chiao tung university (102, 5.19%) is ranked as the most efficient institution in dematel research. among the most used words are "model", "dematel", "selection", "management", "fuzzy dematel". 
key words: multi-criteria decision making, bibliometric, web of science, dematel. 1. introduction decision making can be defined as individuals and organizations choosing the best alternative under current conditions to achieve their goals. decision making is an interdisciplinary field of research that attracts researchers and academics in almost every field. while intelligence, intuition and experience are important in decision making, it is equally important to use scientific methods. mcdm methods (multi-criteria decision-making methods) have been developed for the correct evaluation of multiple different criteria in solving complex problems. mcdm methods refer to the process of evaluating many criteria in a problem at the same time and assigning numerical evaluation to alternatives. mcdm allows decisionmakers to make evaluations and make decisions in multiple dimensions by bringing together multiple disciplines such as mathematics, management, social sciences, and economics (yıldırım & önder, 2018: 15). each method has solution logic in itself (çelikbilek, 2018: 3). the mcdm process consists of two stages. in the first of these stages, all the objectives and provisions given according to the alternatives are mailto:gozde.koca@bilecik.edu.tr mailto:sdayldrmm@gmail.com koca and yıldırım/decis. mak. appl. manag. eng. 4 (1) (2021) 85-103 86 brought together, in the second stage; the most appropriate decision is made by evaluating the alternatives among the combined provisions. (aytaç & gürsakal, 2015: 250). dematel (the decision making trial and evaluation laboratory), one of the mcdm methods, was developed in 1972 by the battelle memorial institute of geneva research center. the method is used in solving complex problem groups (shgeh et al., 2010: 277-282). the advantage of the dematel method is that it separates the distributor and receiver groups in the problem and determines the relationships between the criteria based on graph theory (impact-directional diagram) (lin & tzeng, 2009: 9686). the dematel method, which assumes that all criteria determined for the decision-making problem are in interaction with each other, evaluates the effect levels among the criteria. in the method, factors that are higher than the other criteria are called distributive, and criteria whose exposure level is higher than the effect on the system are called buyers (karaoğlan, 2016: 13). the increasing interest in mcdm methods has caused the publication of dematel method to increase continuously. in this study, bibliometric analysis was performed on the studies related to the method to interpret and summarize the information confusion caused by the continuous increase of the publications made with the dematel method. the reason why the dematel method is examined in this study is that it covers a very different literature that contributes from different disciplines. apart from this, it is to show how the method is examined in different disciplines by revealing causality and by revealing the importance of its differentiation from other mcdm methods. bibliometric analysis is an analysis method that examines scientific studies with the help of numerical analysis and statistics and shows the activities and current status of scientific studies in the field (çetinkaya bozkurt & çetin, 2016: 32). 
accordingly, bibliometric analysis reveals the productivity of countries, institutions and authors, citation analysis of countries, institutions and authors, which type of documents are used more, and how much the documents are distributed, and cooperation maps. for the research, the 1963 document searched from the web of science database with the subject "dematel" on 12.12.2020 was found in the bibliometrix library of the r studio program and analyzed with biblioshiny. all studies on the dematel method between 1999 and 2020 in the web of science database were included in the analysis. along with the analysis, annual studies and total citation rates, the productivity of countries, the number of citations and the cooperation map between countries, the most used journals and the number of citations in the studies conducted on the subject, the most efficient universities, the fields of science in which the dematel method is used and in which journals the studies were published the most, the most productive authors and citation rates, the most cited articles and the most used words in the articles written on the subject and the conceptual structure of the field were shown. 2. literature overview the study conducted by cole and eales in 1917 in the literature is known as the first bibliometric study. in this study; analyzes of studies published in the field of anatomy between 1550-1860 were made. after this study, an analysis was made in the field of historical science by e.wyndham hulme, a librarian at the british patent office in 1923. later, in 1927, p.l.k. gross and e.m. a citation analysis study was conducted by gross to evaluate the bibliography of the articles published in the journal of the biblometric analysis of dematel method 87 american chemical society. the first two studies were based on bibliographic features, not citations, and in gross & gross's study, citation analysis was performed (lawani, 1981: 295, hotamışlı & erem, 2014: 3). on the subject of mcdm, there are many studies conducted in the related literature. popular tools such as vosviewer, rbibliometric package were used in some of these studies. bibliometric studies made using popular tools in the field of mcdm are summarized in table 1 below. table 1. bibliometric studies using popular tools in the mcdm field authors year keyword used time span number of publications reviewed bragge et al. 2010 multi objective, multi criteria 19702007 15198 guerrero-baena et al. 2014 mcdm 19802012 347 zavadkas et al. 2014 mcdm review papers 19902013 71 tramarico et al. 2015 analytic hierarchy process and supply chain 19902014 116 blanco-mesa et al. 2017 fuzzy decision-making 19702014 8135 liu & liao 2017 fuzzy decision 19702015 13901 zyoud and funchs-hanusch 2017 ahp ve topsis 19762016 10188 ahp 2412 topsis peng & dai 2018 neutrosophic set 19982017 137 yu et al. 2018 multiple criteria decisionmaking 19772016 4464 liao et al. 2019 hesitant fuzzy sets 20092018 484 morkūnaitė et al. 2019 cultural heritage buildings with mcdm 1994– 2018 1039 there are literature reviews in the field of mcdm without using popular bibliometric tools. abu-taha (2011) reviewed more than 90 publications on mcdm in the field of renewable energy. he summarized both the application areas and the methodologies used in these publications. as a result of the literature review, it is revealed that ahp is the most used method among all mcdm methodologies. kahraman et al. 
(2015) examined the mcdm literature by dividing it into two parts, multi-attribute and multi-objective decision making. in particular, they focused on multi-objective decision making. they provided tables and graphs for each method (fuzzy ahp, fuzzy vikor, fuzzy topsis, fuzzy electre, etc.). mardani et al. (2015) examined a total of 393 articles published in more than 120 peer-reviewed journals between 2000 and 2014. they found that mcdm methods are frequently used, especially in the fields of energy, environment, and sustainability. gül et al. (2016) conducted a literature review on vikor and fuzzy vikor applications and reviewed 343 publications in total. this comprehensive literature review provides insight into vikor applications for researchers and practitioners. in their study, renganath & suresh (2016) analyzed the literature on mcdm methods used in supplier selection and concluded that the most popular method was fuzzy topsis. stojčić et al. (2019) reviewed the literature on the application of mcdm methods in the field of sustainable engineering. they analyzed 108 articles indexed in the web of science (wos) database between 2008-2018. as a result, they found that sustainable engineering is a very suitable field for the use of mcdm. liu et al. (2019) conducted a comprehensive review of fmea (failure mode and effects analysis) studies using mcdm approaches to evaluate and prioritize failure modes. they reviewed 169 articles published between 1998-2018. this research supports and provides insight to academics and practitioners in effectively adopting mcdm methods to overcome the shortcomings of traditional fmea. chowdhury and paul (2020) conducted a literature analysis of mcdm methods used in corporate sustainability between 2007 and 2019. as a result of this analysis, in which they examined 52 publications, they determined that the most used methods were ahp and topsis.

3. method

bibliometric analysis aims to map the scope of research in a particular area of interest both quantitatively and qualitatively (ellegaard and wallin, 2015: 1809). bibliometry, developed for library and information sciences, is used to classify research according to publications, time periods, and journals (merigo & yang, 2017: 37). in other words, bibliometry strengthens the scientific literature by enabling a better understanding of the research literature (osareh, 1996: 149). stevens (1953) divided bibliometric studies into the two main areas seen below. descriptive studies contribute to authors, journals, years, and disciplines by categorizing publications by country, while evaluative studies show where and how often publications are cited.

1. descriptive studies: country or geographic location; time span; discipline or subject area
2. evaluative studies: source; citation

such an analysis allows identification of early trends in studies conducted in any field (ellegaard and wallin, 2015: 1809). in general terms, it describes scientific collaboration through collaborations between researchers, institutions, and countries. some new tools have been introduced to generate broader data and provide a wide variety of indicators, as listed in table 2. in this study, r-biblioshiny was used.
table 2. popular tools for bibliometric analysis
tools | practitioners
bibexcel | olle persson (authors' frequency tables)
pajek | vladimir batagelj and andrej mrvar
citespace | chaomei chen
vosviewer | nees jan van eck and ludo waltman
r-bibliometric package | massimo aria and corrado cuccurullo

4. results

1963 dematel publications in 800 sources (journals, books, etc.) between 1999 and 2020 in the wos database were examined. dematel publications mostly consist of articles, book chapters, early access documents, proceedings papers and review publications. the average number of citations per document is 15.39 and the average number of citations per year per document is 3.274. figure 1 shows the annual number of citations of the studies conducted with the dematel method. the most citations to dematel studies took place in 2015 and 2018. it is seen that dematel studies receive quite high numbers of citations, which shows that the method has a very dynamic structure. the distribution of the examined publications by years is given in figure 2. as can be understood from figure 2, the number of studies using the dematel method has increased over the years, especially after 2015. the largest number of studies on the method was published in 2020.

figure 1. number of citations by years
figure 2. number of articles by years

table 3 shows the 20 most productive countries in dematel research. according to the table, it is seen that the most productive country is china (553; 28.171%), followed by taiwan (519; 26.439%), iran (251; 12.787%), india (241; 12.277%) and turkey (184; 9.373%). the highest h-index (62) was recorded by taiwan, followed by china (41), india (29), iran (27), turkey (24), and the united states (24). considering the citation counts of the countries, the most cited country is taiwan (12884), followed by china (6228), india (2892), iran (2878), and turkey (2499). relative to the number of studies, the country with the highest citation average is denmark, with 50.87 citations per document.

table 3. ranking of top twenty most productive countries
country | no. of documents | % | h-index | no. of citations | average citations
china | 553 | 28,171 | 41 | 6228 | 11,26
taiwan | 519 | 26,439 | 62 | 12884 | 24,82
iran | 251 | 12,787 | 27 | 2878 | 11,42
india | 241 | 12,277 | 29 | 2892 | 12,00
turkey | 184 | 9,373 | 24 | 2499 | 13,58
usa | 74 | 3,770 | 24 | 1710 | 23,11
england | 63 | 3,209 | 17 | 839 | 13,32
malaysia | 57 | 2,904 | 16 | 679 | 11,91
australia | 41 | 2,089 | 11 | 492 | 12,00
spain | 34 | 1,732 | 11 | 460 | 13,53
serbia | 32 | 1,630 | 16 | 1039 | 32,47
denmark | 31 | 1,579 | 23 | 1577 | 50,87
poland | 31 | 1,579 | 7 | 217 | 7,00
lithuania | 30 | 1,528 | 13 | 796 | 26,53
canada | 29 | 1,477 | 8 | 354 | 12,21
italy | 27 | 1,375 | 11 | 467 | 17,30
philippines | 24 | 1,223 | 9 | 476 | 19,83
south korea | 24 | 1,223 | 7 | 284 | 11,83
japan | 23 | 1,172 | 8 | 524 | 22,78
indonesia | 21 | 1,070 | 3 | 128 | 6,10

the world density map is shown in figure 3 below. the countries where dematel studies are carried out the most are shaded from dark to light; countries shown in gray have no studies on the method.

figure 3. the world density map

the twenty most cooperative countries according to the number of documents are shown in table 4.
according to the table, among the countries with the highest cooperation, taiwan-china is the first with 74 documents, the usa-china is the second with 31 documents, and the uk-china is the third with 22 documents. table 4. the twenty most cooperative countries according to the number of documents from to frequency taiwan china 74 usa china 31 united kingdom china 22 india united kingdom 20 turkey china 20 china australia 17 iran lithuania 16 iran malaysia 14 india denmark 11 iran usa 11 malaysia saudi arabia 11 china denmark 10 china canada 9 india china 9 india usa 9 iran australia 9 taiwan usa 9 india lithuania 8 india spain 8 taiwan philippines 8 world cooperation map is given in figure 4. the countries where the lines are concentrated are determined as the countries that cooperate most with other koca and yıldırım/decis. mak. appl. manag. eng. 4 (1) (2021) 85-103 92 countries. accordingly, china the country with the highest cooperation with other countries, india, iran, taiwan, turkey, the uk and the us appear to be. figure 4. world cooperation map table 5 shows the sources of dematel publications. as shown in table 5 in this study, journal of cleaner production (96; 4,888%) has been the most comprehensive source of dematel research. then, sustainability (90; 4,582%) and expert system applications (77; 3,921%) journals follow. the most cited journal was determined to be the expert system applications journal with 7074 citations. besides, expert system applications journal has the highest h-index (48) and the highest average citation rate (91.87). then, it was seen that journal of cleaner production ranked second with 2895 citations. the journals with the highest h-indexes after the expert system application journal are journal of cleaner production (28), sustainability (16), computers & industrial engineering (16), applied soft computing (16), respectively. table 5. sources of dematel publications sources articles % hindex total citations average citations journal of cleaner production 96 4,888 28 2895 30,16 sustainability 90 4,582 16 741 8,23 expert systems with applications 77 3,921 48 7074 91,87 computers & industrial engineering 32 1,629 16 844 26,38 applied soft computing 26 1,324 16 917 35,27 benchmarking-an international journal 21 1,120 8 167 7,59 international journal of fuzzy systems 20 1,018 10 387 19,35 mathematical problems in engineering 20 1,018 7 216 10,80 biblometric analysis of dematel method 93 international journal of environmental research and public health 19 0,967 5 91 4,79 symmetry-basel 19 0,967 5 154 8,11 resources conservation and recycling 18 0,916 12 573 31,83 ieee access 17 0,866 4 40 2,35 international journal of production research 17 0,866 11 483 28,41 journal of intelligent & fuzzy systems 17 0,866 4 71 4,18 international journal of information technology & decision making 16 0,815 8 192 12,00 soft computing 16 0,815 6 288 18,00 international journal of production economics 15 0,764 13 1004 66,93 safety science 15 0,764 9 500 33,33 energies 14 0,713 5 68 4,86 technological and economic development of economy 14 0,713 8 331 23,64 table 6 shows the 20 most active universities in dematel research. accordingly, it is seen that the most productive university in dematel studies is national chiao tung university in taiwan with 102 documents (5,196). islamic azad university in iran ranks second with 90 documents (4,585) and nan kai university technology in china is third with 86 documents (4,381). 
table 6 shows the 20 most active universities in dematel research. accordingly, the most productive university in dematel studies is national chiao tung university in taiwan with 102 documents (5.196%). islamic azad university in iran ranks second with 90 documents (4.585%) and nan kai university technology in china is third with 86 documents (4.381%). the most cited university is national chiao tung university with 4344 citations and an average of 42.59 citations per document; it also has the highest h-index (37).

table 7 shows the ranking of the twenty most common research areas in dematel studies. most of the published studies are in the field of computer science artificial intelligence (332; 16.904%), and the most used journal in this field is expert systems with applications (77; 23.193%). the next most common areas are environmental sciences (288; 14.664%), operations research management science (285; 14.511%), management (272; 13.849%) and green sustainable science technology (235; 11.965%).

table 8 shows the twenty most productive authors in dematel research. according to the table, tzeng g.h. is the most productive author with 121 documents (6.161%). tzeng g.h. is also the author with the highest h-index (34) and the highest number of citations (4117). after tzeng g.h., the most prolific authors are tseng m.l. (38), dincer h. (36) and liou j.j.h. (36). tseng m.l. is also the second most cited author (1605).

table 6. the 20 most active universities in dematel research
name of the institution | no. of documents | % | h-index | total citations | average citations | country
national chiao tung university | 102 | 5.196 | 37 | 4344 | 42.59 | taiwan
islamic azad university | 90 | 4.585 | 17 | 971 | 10.67 | iran
nan kai university technology | 86 | 4.381 | 33 | 3631 | 42.22 | china
national taipei university | 66 | 3.362 | 19 | 1238 | 18.76 | taiwan
national taipei university of technology | 58 | 2.955 | 19 | 1285 | 22.16 | taiwan
university of tehran | 54 | 2.751 | 16 | 648 | 12.00 | iran
indian institute of technology system (iit system) | 53 | 2.700 | 16 | 663 | 12.51 | india
dalian university of technology | 51 | 2.598 | 18 | 1114 | 21.84 | china
national taiwan normal university | 40 | 2.038 | 12 | 509 | 12.73 | taiwan
istanbul medipol university | 36 | 1.834 | 8 | 166 | 4.61 | turkey
asia university taiwan | 35 | 1.783 | 10 | 591 | 16.89 | taiwan
university of electronic science technology of china | 34 | 1.732 | 20 | 940 | 27.65 | china
chung hua university | 29 | 1.477 | 10 | 335 | 11.55 | taiwan
university of southern denmark | 28 | 1.426 | 22 | 1464 | 52.29 | denmark
vilnius gediminas technical university | 28 | 1.426 | 13 | 788 | 28.14 | lithuania
chinese culture university | 27 | 1.375 | 9 | 252 | 9.33 | china
tamkang university | 27 | 1.375 | 16 | 759 | 28.11 | taiwan
national central university | 25 | 1.274 | 15 | 1134 | 45.36 | taiwan
national taiwan university of science technology | 24 | 1.223 | 9 | 810 | 33.75 | taiwan
shanghai jiao tong university | 24 | 1.223 | 11 | 360 | 15.00 | china

table 7. the twenty most common areas in dematel studies
subject area | no. of documents | % | most used journal | no. of documents | %
computer science artificial intelligence | 332 | 16.904 | expert systems with applications | 77 | 23.193
environmental sciences | 288 | 14.664 | journal of cleaner production | 96 | 33.333
operations research management science | 285 | 14.511 | expert systems with applications | 77 | 27.018
management | 272 | 13.849 | benchmarking-an international journal | 22 | 8.088
green sustainable science technology | 235 | 11.965 | journal of cleaner production | 96 | 40.851
engineering electrical electronic | 187 | 9.521 | expert systems with applications | 77 | 41.176
engineering industrial | 187 | 9.521 | computers & industrial engineering | 32 | 17.112
computer science interdisciplinary applications | 161 | 8.198 | computers & industrial engineering | 32 | 19.876
environmental studies | 144 | 7.332 | sustainability | 90 | 62.500
engineering environmental | 131 | 6.670 | journal of cleaner production | 96 | 73.282
business | 124 | 6.314 | african journal of business management | 9 | 7.258
computer science information systems | 114 | 5.804 | ieee access | 17 | 14.912
engineering multidisciplinary | 111 | 5.652 | mathematical problems in engineering | 20 | 18.018
economics | 105 | 5.346 | technological and economic development of economy | 14 | 13.333
engineering manufacturing | 88 | 4.481 | international journal of production research | 17 | 19.318
computer science theory methods | 79 | 4.022 | journal of multiple valued logic and soft computing | 5 | 6.329
energy fuels | 70 | 3.564 | energies | 14 | 20.000
automation control systems | 61 | 3.106 | international journal of fuzzy systems | 20 | 32.787
engineering civil | 58 | 2.953 | engineering construction and architectural management | 7 | 12.069
multidisciplinary sciences | 54 | 2.749 | symmetry-basel | 19 | 35.185

table 8. the most productive twenty authors on dematel research
authors | articles | % | h-index | total citations | average citations
tzeng gh | 121 | 6.161 | 34 | 4117 | 34.02
tseng ml | 38 | 1.935 | 19 | 1605 | 42.24
dincer h | 36 | 1.833 | 8 | 165 | 4.58
liou jjh | 36 | 1.833 | 17 | 1115 | 30.97
huang cy | 35 | 1.782 | 7 | 394 | 11.26
yuksel s | 32 | 1.629 | 8 | 161 | 5.03
kumar a | 26 | 1.324 | 9 | 230 | 8.85
pamucar d | 23 | 1.171 | 13 | 826 | 35.91
govindan k | 22 | 1.120 | 16 | 1202 | 54.64
liu hc | 21 | 1.069 | 16 | 1054 | 50.19
mangla sk | 21 | 1.069 | 11 | 440 | 20.95
tsai sb | 21 | 1.069 | 14 | 464 | 22.10
chuang yc | 20 | 1.018 | 8 | 315 | 15.75
luthra s | 20 | 1.018 | 12 | 478 | 23.90
lee yc | 17 | 0.866 | 8 | 279 | 16.41
zavadskas ek | 17 | 0.866 | 13 | 741 | 43.59
sarkis j | 16 | 0.815 | 12 | 686 | 42.88
wu kj | 16 | 0.815 | 9 | 429 | 26.81
wu hh | 15 | 0.764 | 8 | 484 | 32.27
hsu cc | 14 | 0.713 | 11 | 390 | 27.86

in table 9, the most cited ten articles about the dematel method are given.
the most cited article in dematel research, with 570 citations, is tzeng g.h., et al. "evaluating intertwined effects in e-learning programs: a novel hybrid mcdm model based on factor analysis and dematel" (2007). in this article, the factors of an e-learning program are analyzed. the second most cited article, with 500 citations, is wu, w.w. & lee, y.t. "developing global managers' competencies using the fuzzy dematel method" (2007). the article by buyukozkan & cifci (2012) titled "a novel hybrid mcdm approach based on fuzzy dematel, fuzzy anp, and fuzzy topsis to evaluate green suppliers" is ranked third with 444 citations.

table 9. the most cited ten articles about the dematel method
authors | title | publication year | source title | total citations | average per year
tzeng g.h., et al. | evaluating intertwined effects in e-learning programs: a novel hybrid mcdm model based on factor analysis and dematel | 2007 | expert systems with applications | 570 | 40.71
wu, w.w. & lee, y.t. | developing global managers' competencies using the fuzzy dematel method | 2007 | expert systems with applications | 500 | 35.71
buyukozkan, g. & cifci, g. | a novel hybrid mcdm approach based on fuzzy dematel, fuzzy anp and fuzzy topsis to evaluate green suppliers | 2012 | expert systems with applications | 444 | 49.33
wu, w.w. | choosing knowledge management strategies by using a combined anp and dematel approach | 2008 | expert systems with applications | 286 | 22.00
lin, r.j. | using fuzzy dematel to evaluate the green supply chain management practices | 2013 | journal of cleaner production | 266 | 33.25
lin, c.j. & wu, w.w. | a causal analytical method for group decision-making under fuzzy environment | 2008 | expert systems with applications | 260 | 20.00
hsu, c.w., et al. | using dematel to develop a carbon management model of supplier selection in green supply chain management | 2013 | journal of cleaner production | 252 | 31.50
chang, b., et al. | fuzzy dematel method for developing supplier selection criteria | 2011 | expert systems with applications | 249 | 24.90
shieh, j., et al. | a dematel method in identifying key success factors of hospital service quality | 2010 | knowledge-based systems | 247 | 22.45
tseng, m.l. | a causal and effect decision making model of service quality expectation using grey-fuzzy dematel approach | 2009 | expert systems with applications | 222 | 18.50

the most commonly used keywords in the dematel method are shown in figure 5. keyword analysis shows the common keywords used by authors. accordingly, the most used keyword in dematel studies is "model".
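the keyword statistics behind figure 5 are obtained by splitting each document's author-keyword field and tallying the terms; the same per-document lists also feed the co-occurrence map. the sketch below assumes keywords are stored as semicolon-separated strings (as in the wos "DE" field); the example values are illustrative placeholders, not the study's data.

```python
# minimal sketch of the keyword frequency count underlying the figure 5 analysis.
# assumes author keywords come as semicolon-separated strings per document
# (wos "DE" field); the example strings are illustrative placeholders.
from collections import Counter

author_keywords = [
    "DEMATEL; model; selection",
    "fuzzy DEMATEL; performance; decision making",
    "DEMATEL; ANP; management",
]

counts = Counter()
for field in author_keywords:
    for kw in field.split(";"):
        kw = kw.strip().lower()
        if kw:
            counts[kw] += 1

for keyword, freq in counts.most_common(10):
    print(keyword, freq)
```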
in addition, the words "dematel", "selection", "management", "performance", "anp" and "decision making" are among the most common keywords.

figure 5. the most commonly used keywords in dematel method

5. conclusion

the focus of this study was to conduct a bibliometric analysis of global studies on the dematel method, one of the mcdm methods. 1963 documents obtained from the wos database between 1999 and 2020 were analyzed with the r studio program. the study presents the annual research output of publications on the dematel method, the document types, the countries, the important journals and authors contributing to the field, the most productive universities, and the fields of science in which the method is used.

in dematel research, china (553), taiwan (519), iran (251), india (241) and turkey (184) are the top five countries. the most cited country is taiwan (12884). the taiwan-china collaboration, with 74 joint documents, holds the leading position in international cooperation. the analysis also showed that many other countries actively participate in research related to the dematel method. the most prolific author in the field is tzeng g.h., followed by tseng m.l. (38), dincer h. (36), liou j.j.h. (36) and huang c.y. (35). looking at the web of science categories, studies are concentrated in fields such as computer science and artificial intelligence, environmental science, operations research and management science, management, green sustainable technologies, electrical and electronics engineering, and industrial engineering.

among the sources, journal of cleaner production ranks at the top with 96 studies. the journal sustainability takes second place with 90 studies, and expert systems with applications takes third place with 77 studies. the most cited journal is "expert systems with applications" with 7074 citations. the most productive university is national chiao tung university (taiwan) with 102 studies, followed by islamic azad university (iran) with 90 studies and nan kai university technology (china) in third place with 86 studies. looking at the conceptual structure of the studies, they concentrate on words such as model, dematel, selection, management, performance, anp, decision making, and fuzzy dematel.

the findings of the study show the development of research on the dematel method, one of the mcdm methods. the evaluations show that research on the dematel method is quite dynamic, and it is reasonable to expect that studies on this method will increase in the following years. overall, the findings of this analysis provide a general picture of the evolution of the dematel method. this can assist practitioners and academics in identifying and evaluating efforts to advance research in these areas, help develop new lines of research for the future, and advance the use of the method in more applications. the methodology used can be applied to other mcdm methods or other topics, and the use of variables can be expanded by exploiting the relative advantages of different bibliometric tools.

author contributions: each author has participated and contributed sufficiently to take public responsibility for appropriate portions of the content.

funding: this research received no external funding.
conflicts of interest: the authors declare no conflicts of interest.

references

abu-taha, r. (2011). multi-criteria applications in renewable energy analysis: a literature review. in 2011 proceedings of picmet'11: technology management in the energy smart world (picmet) (pp. 1-8). ieee.

aytaç, m., & gürsakal, n. (2015). karar verme [decision making]. dora basım yayın dağıtım aş, bursa.

blanco-mesa, f., merigó, j. m., & gil-lafuente, a. m. (2017). fuzzy decision making: a bibliometric-based review. journal of intelligent & fuzzy systems, 32(3), 2033-2050.

bai, c., & sarkis, j. (2013). a grey-based dematel model for evaluating business process management critical success factors. international journal of production economics, 146(1), 281-292.

bragge, j., korhonen, p., wallenius, h., & wallenius, j. (2010). bibliometric analysis of multiple criteria decision making/multiattribute utility theory. in multiple criteria decision making for sustainable energy and transportation systems (pp. 259-268). springer, berlin, heidelberg.

büyüközkan, g., & çifçi, g. (2012). a novel hybrid mcdm approach based on fuzzy dematel, fuzzy anp and fuzzy topsis to evaluate green suppliers. expert systems with applications, 39(3), 3000-3011.

chang, b., chang, c. w., & wu, c. h. (2011). fuzzy dematel method for developing supplier selection criteria. expert systems with applications, 38(3), 1850-1858.

chen, f. h., hsu, t. s., & tzeng, g. h. (2011). a balanced scorecard approach to establish a performance evaluation and relationship model for hot spring hotels based on a hybrid mcdm model combining dematel and anp. international journal of hospitality management, 30(4), 908-932.

chowdhury, p., & paul, s. k. (2020). applications of mcdm methods in research on corporate sustainability. management of environmental quality: an international journal.

çelikbilek, y. (2018). çok kriterli karar verme yöntemleri [multi-criteria decision making methods]. nobel akademik yayıncılık eğitim danışmanlık tic. ltd. şti., ankara.

çetinkaya bozkurt, ö., & çetin, a. (2016). girişimcilik ve kalkınma dergisi'nin bibliyometrik analizi [bibliometric analysis of the journal of entrepreneurship and development]. girişimcilik ve kalkınma dergisi, 11(2), 229-263.

guerrero-baena, m. d., gómez-limón, j. a., & fruet cardozo, j. v. (2014). are multicriteria decision making techniques useful for solving corporate finance problems? a bibliometric analysis. revista de metodos cuantitativos para la economia y la empresa, 17, 60-79.

ellegaard, o., & wallin, j. a. (2015). the bibliometric analysis of scholarly production: how great is the impact? scientometrics, 105(3), 1809-1831.

gul, m., celik, e., aydin, n., gumus, a. t., & guneri, a. f. (2016). a state of the art literature review of vikor and its fuzzy extensions on applications. applied soft computing, 46, 60-89.

govindan, k., khodaverdi, r., & vafadarnikjoo, a. (2015). intuitionistic fuzzy based dematel method for developing green practices and performances in a green supply chain. expert systems with applications, 42(20), 7207-7220.

hotamışlı, m., & erem, i. (2014). bibliometric analysis of the articles published in journal of accounting and finance. the journal of accounting and finance, 16(63), 119.

hsu, c. w., kuo, t. c., chen, s. h., & hu, a. h. (2013). using dematel to develop a carbon management model of supplier selection in green supply chain management. journal of cleaner production, 56, 164-172.

huang, c. y., shyu, j. z., & tzeng, g. h. (2007). reconfiguring the innovation policy portfolios for taiwan's sip mall industry. technovation, 27(12), 744-765.
kahraman, c., onar, s. c., & oztaysi, b. (2015). fuzzy multicriteria decision-making: a literature review. international journal of computational intelligence systems, 8(4), 637-666.

karaoğlan, s. (2016). dematel ve vikor yöntemleriyle dış kaynak seçimi: otel işletmeciliği örneği [outsourcing selection with dematel and vikor methods: the case of hotel management]. akademik bakış dergisi, 55, 9-24.

lawani, s. m. (1981). bibliometrics: its theoretical foundations, methods and applications. international journal of libraries and information services, 31(4), 294-315.

liao, h., tang, m., zhang, x., & al-barakati, a. (2019). detecting and visualizing in the field of hesitant fuzzy sets: a bibliometric analysis from 2009 to 2018. international journal of fuzzy systems, 21(5), 128.

lin, c. l., & tzeng, g. h. (2009). a value-created system of science (technology) park by using dematel. expert systems with applications, 36(6), 9683-9697.

lin, c. j., & wu, w. w. (2008). a causal analytical method for group decision-making under fuzzy environment. expert systems with applications, 34(1), 205-213.

lin, r. j. (2013). using fuzzy dematel to evaluate the green supply chain management practices. journal of cleaner production, 40, 32-39.

liu, h. c., liu, l., liu, n., & mao, l. x. (2012). risk evaluation in failure mode and effects analysis with extended vikor method under fuzzy environment. expert systems with applications, 39(17), 12926-12934.

liu, w., & liao, h. (2017). a bibliometric analysis of fuzzy decision research during 1970–2015. international journal of fuzzy systems, 19(1), 1-14.

liu, h. c., chen, x. q., duan, c. y., & wang, y. m. (2019). failure mode and effect analysis using multi-criteria decision making methods: a systematic literature review. computers & industrial engineering, 135, 881-897.

mardani, a., jusoh, a., nor, k., khalifah, z., zakwan, n., & valipour, a. (2015). multiple criteria decision-making techniques and their applications – a review of the literature from 2000 to 2014. economic research-ekonomska istraživanja, 28(1), 516-571.

merigó, j. m., & yang, j. b. (2017). a bibliometric analysis of operations research and management science. omega, 73, 37-48.

morkūnaitė, ž., kalibatas, d., & kalibatienė, d. (2019). a bibliometric data analysis of multicriteria decision making methods in heritage buildings. journal of civil engineering and management, 25(2), 76-99.

osareh, f. (1996). bibliometrics, citation analysis and co-citation analysis: a review of literature i. libri, 46(3), 149-158.

peng, x., & dai, j. (2018). a bibliometric analysis of neutrosophic set: two decades review from 1998 to 2017. artificial intelligence review, 1-57.

renganath, k., & suresh, m. (2016, december). supplier selection using fuzzy mcdm techniques: a literature review. in 2016 ieee international conference on computational intelligence and computing research (iccic) (pp. 1-6). ieee.

seyed-hosseini, s. m., safaei, n., & asgharpour, m. j. (2006). reprioritization of failures in a system failure mode and effects analysis by decision making trial and evaluation laboratory technique. reliability engineering & system safety, 91(8), 872-881.

shieh, j. i., wu, h. h., & huang, k. k. (2010). a dematel method in identifying key success factors of hospital service quality. knowledge-based systems, 23(3), 277-282.
stevens, r. e. (1953). characteristics of subject literatures. chicago: american college and research library monograph series 7.

stojčić, m., zavadskas, e. k., pamučar, d., stević, ž., & mardani, a. (2019). application of mcdm methods in sustainability engineering: a literature review 2008–2018. symmetry, 11(3), 350.

tramarico, c. l., salomon, v. a. p., & marins, f. a. s. (2015). analytic hierarchy process and supply chain management: a bibliometric study. procedia computer science, 55, 441-450.

tsai, w. h., & chou, w. c. (2009). selecting management systems for sustainable development in smes: a novel hybrid model based on dematel, anp, and zogp. expert systems with applications, 36(2), 1444-1458.

tseng, m. l. (2009). a causal and effect decision making model of service quality expectation using grey-fuzzy dematel approach. expert systems with applications, 36(4), 7738-7748.

tzeng, g. h., chiang, c. h., & li, c. w. (2007). evaluating intertwined effects in e-learning programs: a novel hybrid mcdm model based on factor analysis and dematel. expert systems with applications, 32(4), 1028-1044.

yang, j. l., & tzeng, g. h. (2011). an integrated mcdm technique combined with dematel for a novel cluster-weighted with anp method. expert systems with applications, 38(3), 1417-1424.

yıldırım, b. f., & önder, e. (2018). operasyonel, yönetsel ve stratejik problemlerin çözümünde çok kriterli karar verme yöntemleri [multi-criteria decision making methods in solving operational, managerial and strategic problems]. dora basım yayın dağıtım aş, bursa.

yu, d., wang, w., zhang, w., & zhang, s. (2018). a bibliometric analysis of research on multiple criteria decision making. current science, 114(4), 747-758.

wu, w. w., & lee, y. t. (2007). developing global managers' competencies using the fuzzy dematel method. expert systems with applications, 32(2), 499-507.

wu, w. w. (2008). choosing knowledge management strategies by using a combined anp and dematel approach. expert systems with applications, 35(3), 828-835.

xia, x., govindan, k., & zhu, q. (2015). analyzing internal barriers for automotive parts remanufacturers in china using grey-dematel approach. journal of cleaner production, 87, 811-825.

zavadskas, e. k., turskis, z., & kildienė, s. (2014). state of art surveys of overviews on mcdm/madm methods. technological and economic development of economy, 20(1), 165-179.

zhou, q., huang, w., & zhang, y. (2011). identifying critical success factors in emergency management using a fuzzy dematel method. safety science, 49(2), 243-252.

zyoud, s. h., & fuchs-hanusch, d. (2017). a bibliometric-based survey on ahp and topsis techniques. expert systems with applications, 78, 158-181.

© 2018 by the authors. submitted for possible open access publication under the terms and conditions of the creative commons attribution (cc by) license (http://creativecommons.org/licenses/by/4.0/).

decision making: applications in management and engineering vol. 4, issue 2, 2021, pp. 200-224 issn: 2560-6018 eissn: 2620-0104 doi: https://doi.org/10.31181/dmame210402200k

* corresponding author. e-mail addresses: marija.kuzmanovic@fon.bg.ac.rs (m. kuzmanovic), milena.vukic12@gmail.com (m. vukic)
incorporating heterogeneity of travelers' preferences into the overall hostel performance rating

marija kuzmanović1* and milena vukić2

1 university of belgrade, faculty of organizational sciences, belgrade, serbia
2 the college of hotel management in belgrade, belgrade, serbia

received: 4 february 2021; accepted: 27 june 2021; available online: 12 july 2021.

original scientific paper

abstract: hostels have become a very popular form of accommodation and their varieties have grown steadily in recent years. to ensure the sustainability of this business model, it is necessary to understand the main drivers influencing travelers to choose hostel accommodation. for this purpose, we conducted an online survey using convenience and purposive sampling techniques. respondents' preferences for six hostel attributes (cleanliness, location, staff, atmosphere, facilities, and cancellation policy) were determined using discrete choice analysis. sample results showed that the most important attributes are cleanliness and location, while atmosphere is the least important one. however, widespread heterogeneity in preferences was observed, and cluster analyses identified three distinct groups of travelers: "cleanliness sticklers", "location demanders" and "party seekers". facilities and atmosphere were found to be very important attributes for particular clusters. these findings can help design a marketing strategy for each of the identified segments to ensure sustainable business. finally, we have proposed a new approach to calculating the overall hostel rating based on attribute importance, which shows much better discriminatory power compared to the traditional average-based approach.

key words: hostel; discrete choice analysis; attribute importance scores; preference-based clustering; simulation; weighted performance rating.

1. introduction

over the last six decades, the tourism industry has become one of the largest economic sectors in the world (mihalic, 2014). the importance of the tourism industry is evident in both developed and developing countries, which is best reflected through a number of direct and indirect impacts on national economies (world travel and tourism council, 2018). further strong growth of the tourism industry around the world was expected, but the appearance of the covid-19 pandemic temporarily hindered it. farr (2020) states that after the end of the pandemic, it will take about 18 to 24 months for daily tourist activities to return to the pre-pandemic level (nguyen, 2020). the structure of the tourism market has changed significantly and continues to change over the years, with increasing attention paid to the concept of sustainability and related topics such as the circular economy, collaborative consumption, the sharing economy, and targeting low-income consumers (lemus-aguilar et al., 2019). the development of the internet has significantly contributed to these trends. nowadays, tourists have access to more information, are more mobile, and are more willing to experiment with unconventional forms of travel. the expansion of low-cost airlines has increased the number of both available destinations and flights between destinations, leading to further price reductions due to growing competition.
a study carried out by eugenio-martin and inchausti-sintes (2016) shows that savings achieved by low-cost transportation are at least partially transferred to spending at the destination itself. positive changes are also noticeable in low-budget accommodations such as hostels, which show more online penetration than hotels and apartment rentals (muñoz-fernández et al., 2016). despite the outbreak of the covid-19 pandemic in 2020, which negatively affected various industries, including tourism, travelers are expected to travel again, and their demands will affect the future of affordable accommodation. accordingly, hostels must be prepared to respond to these demands in the right way.

although initially the cost of accommodation was the main reason for travelers to choose hostels, over the years the profile of hostel guests has changed and their motives and preferences have become more diverse. to increase guest satisfaction, some hostels have launched a number of specific services such as self-service facilities, group social and sports activities, and the ability to rent certain equipment. some hostels have recognized the importance of environmental sustainability and are taking action to promote such activities. the development of technology and digitization has made it possible for hostel visitors to exchange impressions and accommodation reviews. on the one hand, this provides guests with a certain level of security when choosing accommodation and helps them find accommodations that match their desires. hostel owners, on the other hand, receive feedback from their clients and can eliminate potential weaknesses in time, as in the internet era only a few negative reviews can have serious consequences for business success (martins et al., 2018). the fact is that hostels have become very attractive to investors in recent years, that their number and varieties are constantly growing, and that even more prosperity is expected in the future through improvements in product quality and product offerings. in such a competitive environment, the concept of sustainability becomes crucial. unlike traditional entrepreneurship, which focuses mainly on economic development, sustainable entrepreneurship and sustainable business models aim to balance economic, social and environmental goals (belz & binder, 2017).

in this study, we focus on the key factors influencing hostel guests' satisfaction, which is closely related to the economic and social dimensions of sustainable business. we sought to identify individuals' preferences for key hostel characteristics, identify the most important factors that influence their decision when choosing a hostel, and investigate whether these factors depend on the demographics of the respondents or their habits and attitudes. for that purpose, discrete choice analysis (dca) was employed in this study. dca is an approach for identifying the relative importance of attributes when individuals choose between comparable products or services based on specific features. it has been successfully used for the analysis of individual choice behavior in many fields such as economics, marketing, education, transportation, environmental management, and healthcare (rakotonarivo et al., 2016; popović et al., 2018; kuzmanovic et al., 2020).
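to make the idea of attribute importance concrete, the snippet below shows the usual way importance scores are derived from estimated part-worth utilities in a discrete choice or conjoint setting: the range of an attribute's part-worths is divided by the sum of all ranges. the attributes mirror those examined in this study, but the part-worth values are purely illustrative placeholders and not the estimates reported later in the paper.

```python
# minimal sketch of attribute importance scores derived from part-worth utilities.
# the utility values below are illustrative placeholders, not estimated results.
part_worths = {
    "cleanliness":         {"below average": -1.2, "average": 0.0, "spotless": 1.1},
    "location":            {"peripheral": -0.9, "near center": 0.0, "city center": 0.8},
    "staff":               {"unhelpful": -0.5, "helpful": 0.4},
    "atmosphere":          {"quiet": -0.1, "social": 0.2},
    "facilities":          {"basic": -0.3, "extended": 0.3},
    "cancellation policy": {"non-refundable": -0.4, "free cancellation": 0.4},
}

# importance of an attribute = range of its part-worths, normalized to sum to 100%
ranges = {attr: max(levels.values()) - min(levels.values())
          for attr, levels in part_worths.items()}
total = sum(ranges.values())
importance = {attr: 100 * r / total for attr, r in ranges.items()}

for attr, score in sorted(importance.items(), key=lambda item: -item[1]):
    print(f"{attr}: {score:.1f}%")
```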
it has also been applied in the tourism industry, but primarily to determine guests' preferences and willingness to pay for hotel room attributes and preferences towards tourist destinations (capitello et al., 2017; chang et al., 2018; gonzález et al., 2018; kim & park, 2017; oppewal et al., 2015; vukic et al., 2015). however, so far dca has not been used to identify the trade-offs that respondents are willing to make when it comes to different factors that affect their choice of hostel. furthermore, using dca it is possible to identify whether guests' preferences are heterogeneous, but also to calculate the real overall hostel rating, taking into account that not all factors are equally relevant to certain groups of guests. if there are differences in factors that affect the satisf